java - Need some help with deeplearning4j single RBM usage


I have a bunch of sensors and I want to reconstruct their input.

So I want this:

  1. after the model has been trained, pass in a feature matrix
  2. get the reconstructed feature matrix back
  3. investigate which sensor values differ from the reconstructed values

Therefore I thought an RBM is the right choice, and since I am used to Java, I have tried to use deeplearning4j. But I got stuck early on. If I run the following code, I face 2 problems:

1. The result is far away from a correct prediction; most of the values are [1.00, 1.00, 1.00].

  2. I would expect 4 values back (which is the number of inputs that should be reconstructed).

So what do I have to tune to get a) a better result and b) the reconstructed inputs back?

    public static void main(String[] args) {
        // Customize ND4J printing and numerical-stability params
        Nd4j.MAX_SLICES_TO_PRINT = -1;
        Nd4j.MAX_ELEMENTS_PER_SLICE = -1;
        Nd4j.ENFORCE_NUMERICAL_STABILITY = true;

        final int numRows = 4;
        final int numColumns = 1;
        int outputNum = 3;
        int numSamples = 150;
        int batchSize = 150;
        int iterations = 100;
        int seed = 123;
        int listenerFreq = iterations / 5;

        DataSetIterator iter = new IrisDataSetIterator(batchSize, numSamples);

        // Load the data and format it so it is consumable by the NN
        DataSet iris = iter.next();
        iris.normalize();
        //iris.scale();
        System.out.println(iris.getFeatureMatrix());

        NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                // Gaussian visible; rectified hidden
                // set contrastive divergence to 1
                .layer(new RBM.Builder()
                        .nIn(numRows * numColumns) // input nodes
                        .nOut(outputNum) // output nodes
                        .activation("tanh") // activation function type
                        .weightInit(WeightInit.XAVIER) // weight initialization
                        .lossFunction(LossFunctions.LossFunction.XENT)
                        .updater(Updater.NESTEROVS)
                        .build())
                .seed(seed) // locks in weight initialization for tuning
                .iterations(iterations)
                .learningRate(1e-1f) // backprop step size
                .momentum(0.5) // speed of modifying the learning rate
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT) // calculates gradients
                .build();

        Layer model = LayerFactories.getFactory(conf.getLayer()).create(conf);
        model.setListeners(Arrays.asList((IterationListener) new ScoreIterationListener(listenerFreq)));

        model.fit(iris.getFeatureMatrix());
        System.out.println(model.activate(iris.getFeatureMatrix(), false));
    }

For b): when you call activate(), you get a list of "nLayers" arrays. Every array in that list is the activation of one layer. Each array is composed of rows: one row per input vector, and each column contains the activation of each neuron in that layer for that observation (input). Once all the layers have been activated with a given input, you can get the reconstruction with the RBM.propDown() method.
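As a minimal sketch of that reconstruction step, reusing the iris and model variables from the question's main() and assuming the same 0.x-era API (note that the cast target is the layer implementation org.deeplearning4j.nn.layers.feedforward.rbm.RBM, not the configuration class of the same name):

    // Sketch only: forward-propagate the input to get the hidden activations,
    // then map them back down with propDown() to get the reconstruction.
    INDArray input = iris.getFeatureMatrix();                  // 150 x 4 visible units
    INDArray hidden = model.activate(input, false);            // 150 x 3 hidden activations
    INDArray reconstruction = ((RBM) model).propDown(hidden);  // 150 x 4 reconstructed inputs

    // Compare the original sensor values with their reconstruction.
    System.out.println(input.sub(reconstruction));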

As for a), I'm afraid it's very tricky to train an RBM correctly. You want to play with every parameter and, more importantly, monitor various metrics during training that give you a hint about whether it's training correctly or not. Personally, I like to plot (a listener sketch follows the list):

  • the score() on the training corpus, i.e. the reconstruction error after every gradient update; check that it decreases.
  • the score() on a development corpus: very useful to be warned when overfitting occurs;
  • the norm of the parameter vector: it has a large impact on the score;
  • both activation maps (= an xy rectangular plot of the activated neurons of one layer over the corpus), right after initialization and after N steps: this helps to detect unreliable training (e.g. when the map is all black/white, or when a large part of the neurons are never activated, etc.)
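To record the first three of those series, a hypothetical listener along these lines (not from the original post; the IterationListener callbacks and the norm2 call may differ slightly between dl4j/ND4J versions) can be attached next to the ScoreIterationListener:

    import java.util.ArrayList;
    import java.util.List;
    import org.deeplearning4j.nn.api.Model;
    import org.deeplearning4j.optimize.api.IterationListener;

    // Hypothetical monitoring listener: records the training score
    // (reconstruction error) and the L2 norm of the parameter vector
    // after every gradient update, so both can be plotted afterwards.
    public class TrainingMonitor implements IterationListener {
        private final List<Double> scores = new ArrayList<>();
        private final List<Double> paramNorms = new ArrayList<>();
        private boolean invoked = false;

        @Override
        public boolean invoked() { return invoked; }

        @Override
        public void invoke() { this.invoked = true; }

        @Override
        public void iterationDone(Model model, int iteration) {
            invoke();
            scores.add(model.score()); // reconstruction error on the training batch
            // L2 norm over all parameters (exact call may vary by ND4J version)
            paramNorms.add(model.params().norm2(Integer.MAX_VALUE).getDouble(0));
        }

        public List<Double> getScores()     { return scores; }
        public List<Double> getParamNorms() { return paramNorms; }
    }

Attach it before fit() with model.setListeners(Arrays.asList((IterationListener) new TrainingMonitor(), new ScoreIterationListener(listenerFreq))), then dump both series to a file for plotting; evaluating the same score on a held-out development set gives the second curve.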
