Custom dataset for Luminoth Tutorial

Hi Lumi team,
For the last 4 weeks we have been running various iterations of training with our custom dataset. The setup to run the training works well. So far we have tried:
Using Faster R-CNN and SSD.
Training with one class of up to 320 images. We gradually increased the count from 80 to 320 (high-quality images, around 2 MB each, mostly consumable products found in supermarkets).
Using the CSV file format, with xmin, xmax, ymin, ymax columns for the annotations (a sample is shown below).
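
For reference, a sketch of our annotation CSV, assuming one row per bounding box; the image filenames and label below are hypothetical placeholders:

    image_id,xmin,xmax,ymin,ymax,label
    shelf_001.jpg,34,312,58,290,cereal_box
    shelf_001.jpg,340,610,60,295,cereal_box
    shelf_002.jpg,120,480,75,410,cereal_box
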
Version
Lumi : latest version
Python : 3.6.5 :: Anaconda, Inc.
TensorFlow : 1.10.0
AWS : EC2, Linux 4.4.0-1075-aws
Using the recommended values in the config.yml:

learning_rate:
  decay_method: piecewise_constant
  boundaries: [1000000, 1200000]
  values: [0.0001, 0.0001, 0.00001]
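
As we understand it, this schedule keeps the learning rate at 0.0001 until step 1,200,000 and then drops it to 0.00001 (the value at the first boundary is unchanged). Below is a minimal TensorFlow 1.x sketch of what we assume the piecewise_constant decay resolves to; the variable names are ours, not Luminoth's:

    import tensorflow as tf

    # Sketch only: we assume decay_method: piecewise_constant maps to
    # tf.train.piecewise_constant with the boundaries/values from config.yml.
    global_step = tf.train.get_or_create_global_step()
    boundaries = [1000000, 1200000]
    values = [0.0001, 0.0001, 0.00001]  # lr before, between, and after the boundaries
    learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in (0, 1100000, 1300000):
            sess.run(tf.assign(global_step, step))
            print(step, sess.run(learning_rate))  # 0.0001, 0.0001, 0.00001

If that reading is correct, a run on our dataset size would likely never reach the first boundary, so the schedule is effectively a constant 0.0001.
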
Issue
We can see a lot of potential in Lumi; we need your suggestions/hints on dataset size and config file changes.
Thanks
Kani