Binary classification with PyMVA¶
In [1]:
import ROOT
Welcome to JupyROOT 6.09/01
In [2]:
# Select Theano as backend for Keras
from os import environ
environ['KERAS_BACKEND'] = 'theano'
# Set architecture of system (AVX instruction set is not supported on SWAN)
environ['THEANO_FLAGS'] = 'gcc.cxxflags=-march=corei7'
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import Adam
Using Theano backend.
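If you want to confirm that the environment variables above actually took effect before building a model, a quick check like the following should work (a sketch; it assumes keras.backend.backend() is available in your Keras version):

# Optional sanity check: confirm which backend Keras picked up (assumes keras.backend.backend() exists)
from keras import backend as K
print('Keras backend in use:', K.backend())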
Load data¶
In [3]:
# Open file
data = ROOT.TFile.Open('https://raw.githubusercontent.com/iml-wg/tmvatutorials/master/inputdata.root')
# Get signal and background trees from file
signal = data.Get('TreeS')
background = data.Get('TreeB')
# Add variables to dataloader
dataloader = ROOT.TMVA.DataLoader('dataset_pymva')
numVariables = len(signal.GetListOfBranches())
for branch in signal.GetListOfBranches():
    dataloader.AddVariable(branch.GetName())
# Add trees to dataloader
dataloader.AddSignalTree(signal, 1.0)
dataloader.AddBackgroundTree(background, 1.0)
trainTestSplit = 0.8
dataloader.PrepareTrainingAndTestTree(ROOT.TCut(''),
        'TrainTestSplit_Signal={}:'.format(trainTestSplit)+\
        'TrainTestSplit_Background={}:'.format(trainTestSplit)+\
        'SplitMode=Random')
DataSetInfo : [dataset_pymva] : Added class "Signal"
: Add Tree TreeS of type Signal with 6000 events
DataSetInfo : [dataset_pymva] : Added class "Background"
: Add Tree TreeB of type Background with 6000 events
: Dataset[dataset_pymva] : Class index : 0 name : Signal
: Dataset[dataset_pymva] : Class index : 1 name : Background
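Before handing everything to the factory, it can help to check what was actually loaded; a minimal sketch using only the objects defined above:

# Quick sanity check on the input trees (illustrative only)
print('Signal entries:    ', signal.GetEntries())
print('Background entries:', background.GetEntries())
print('Input variables:   ', [b.GetName() for b in signal.GetListOfBranches()])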
Set up TMVA¶
In [4]:
# Setup TMVA
ROOT.TMVA.Tools.Instance()
ROOT.TMVA.PyMethodBase.PyInitialize()
outputFile = ROOT.TFile.Open('TMVAOutputPyMVA.root', 'RECREATE')
factory = ROOT.TMVA.Factory('TMVAClassification', outputFile,
        '!V:!Silent:Color:DrawProgressBar:Transformations=I,G:'+\
        'AnalysisType=Classification')
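Note that Transformations=I,G only selects which preview transformations (identity and Gaussianisation) TMVA computes and prints for the input variables; the preprocessing used by each classifier is set later via VarTransform in its booking string. For batch running you might prefer a quieter factory; a possible variant of the call above (illustrative options, not part of the original notebook):

# Quieter factory configuration for batch jobs (sketch; replaces the call above, do not run both)
factory = ROOT.TMVA.Factory('TMVAClassification', outputFile,
        '!V:Silent:Color:!DrawProgressBar:Transformations=I:'+\
        'AnalysisType=Classification')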
Define model for Keras¶
In [5]:
# Define model
model = Sequential()
model.add(Dense(32, init='glorot_normal', activation='relu',
                input_dim=numVariables))
model.add(Dropout(0.5))
model.add(Dense(32, init='glorot_normal', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, init='glorot_uniform', activation='softmax'))
# Set loss and optimizer
model.compile(loss='categorical_crossentropy', optimizer=Adam(),
              metrics=['categorical_accuracy'])
# Store model to file
model.save('model.h5')
# Print summary of model
model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
dense_1 (Dense)                  (None, 32)            160         dense_input_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 32)            0           dense_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 32)            1056        dropout_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 32)            0           dense_2[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 2)             66          dropout_2[0][0]
====================================================================================================
Total params: 1282
____________________________________________________________________________________________________
WARNING (theano.gof.cmodule): WARNING: your Theano flags `gcc.cxxflags` specify an `-march=X` flag. It is better to let Theano/g++ find it automatically, but we don't do it now
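Since TMVA only receives the file name, it can be reassuring to check that the stored model round-trips; a small sketch (it assumes your Keras version ships keras.models.load_model):

# Optional: reload the stored model to verify model.h5 is self-contained (assumes load_model is available)
from keras.models import load_model
restored = load_model('model.h5')
restored.summary()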
Book methods¶
Just run the cells that contain the classifiers you want to try.
In [6]:
# Keras interface with previously defined model
factory.BookMethod(dataloader, ROOT.TMVA.Types.kPyKeras, 'PyKeras',
        'H:!V:VarTransform=G:FilenameModel=model.h5:'+\
        'NumEpochs=10:BatchSize=32:'+\
        'TriesEarlyStopping=3')
Out[6]:
<ROOT.TMVA::MethodPyKeras object ("PyKeras") at 0x77e48b0>
Factory : Booking method: PyKeras
:
PyKeras : [dataset_pymva] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: Load model from file: model.h5
In [7]:
# Gradient tree boosting from scikit-learn package
factory.BookMethod(dataloader, ROOT.TMVA.Types.kPyGTB, 'GTB',
        'H:!V:VarTransform=None:'+\
        'NEstimators=100:LearningRate=0.1:MaxDepth=3')
Out[7]:
<ROOT.TMVA::MethodPyGTB object ("GTB") at 0x77c0a30>
Factory : Booking method: GTB
:
DataSetFactory : [dataset_pymva] : Number of events in input trees
:
:
: Dataset[dataset_pymva] : Weight renormalisation mode: "EqualNumEvents": renormalises all event classes ...
: Dataset[dataset_pymva] : such that the effective (weighted) number of events in each class is the same
: Dataset[dataset_pymva] : (and equals the number of events (entries) given for class=0 )
: Dataset[dataset_pymva] : ... i.e. such that Sum[i=1..N_j]{w_i} = N_classA, j=classA, classB, ...
: Dataset[dataset_pymva] : ... (note that N_j is the sum of TRAINING events
: Dataset[dataset_pymva] : ..... Testing events are not renormalised nor included in the renormalisation factor!)
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 4800
: Signal -- testing events : 1200
: Signal -- training and testing events: 6000
: Background -- training events : 4800
: Background -- testing events : 1200
: Background -- training and testing events: 6000
:
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 +0.379 +0.585 +0.813
: var2: +0.379 +1.000 +0.691 +0.727
: var3: +0.585 +0.691 +1.000 +0.848
: var4: +0.813 +0.727 +0.848 +1.000
: ----------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 +0.852 +0.914 +0.964
: var2: +0.852 +1.000 +0.925 +0.935
: var3: +0.914 +0.925 +1.000 +0.970
: var4: +0.964 +0.935 +0.970 +1.000
: ----------------------------------------
DataSetFactory : [dataset_pymva] :
:
/cvmfs/sft-nightlies.cern.ch/lcg/views/dev3/Sat/x86_64-slc6-gcc49-opt/lib/python2.7/site-packages/ipykernel/__main__.py:4: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew.
/cvmfs/sft-nightlies.cern.ch/lcg/views/dev3/Sat/x86_64-slc6-gcc49-opt/lib/python2.7/site-packages/ipykernel/__main__.py:4: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr.
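Further PyMVA classifiers can be booked in exactly the same way. For example, a scikit-learn random forest could be added along these lines (the option names mirror those of the GTB booking and are assumptions, not taken from the original notebook):

# Hypothetical extra booking: scikit-learn random forest via PyMVA (not part of the original notebook)
factory.BookMethod(dataloader, ROOT.TMVA.Types.kPyRandomForest, 'RandomForest',
        'H:!V:VarTransform=None:'+\
        'NEstimators=100:MaxDepth=3')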
Run training, testing and evaluation¶
In [8]:
factory.TrainAllMethods()
Factory : Train all methods
Factory : [dataset_pymva] : Create Transformation "I" with events from all classes.
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset_pymva] : Create Transformation "G" with events from all classes.
: Transformation, Variable selection : (same input/output mapping as above)
: Preparing the Gaussian transformation...
TFHandler_Factory : Variable     Mean        RMS      [     Min        Max ]
: -----------------------------------------------------------
: var1:  0.0065519   0.99843   [  -3.1728   5.7307 ]
: var2:  0.0068699   1.0010    [  -3.1728   5.7307 ]
: var3:  0.0067702   1.0001    [  -3.1728   5.7307 ]
: var4:  0.0066114   0.99911   [  -3.1728   5.7307 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
Id_GaussTransformation : Ranking result (top variable is best ranked)
: -----------------------------
: Rank : Variable : Separation
: -----------------------------
:    1 : var4     : 3.445e-01
:    2 : var3     : 2.750e-01
:    3 : var1     : 2.670e-01
:    4 : var2     : 2.116e-01
: -----------------------------
Factory : Train method: PyKeras for Classification
: H e l p   f o r   M V A   m e t h o d   [ PyKeras ]
: Keras is a high-level API for the Theano and Tensorflow packages. This method wraps the
: training and prediction steps of the Keras Python package for TMVA, so that data loading,
: preprocessing and evaluation can be done within the TMVA system. To use this Keras interface,
: you have to generate a model with Keras first; this model can then be loaded and trained in TMVA.
: <Suppress this message by specifying "!H" in the booking option>
: Preparing the Gaussian transformation...
TFHandler_PyKeras : (input variable statistics identical to the TFHandler_Factory table above)
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
: Option TriesEarlyStopping: Training will stop after 3 epochs with no improvement of validation loss
Train on 9600 samples, validate on 2400 samples (per-batch progress output omitted; end-of-epoch summaries below)
Epoch  1/10 - loss: 0.6084 - categorical_accuracy: 0.6504 - val_loss: 0.5342 - val_categorical_accuracy: 0.7600 (val_loss improved from inf to 0.53422, saving model to dataset_pymva/weights/TrainedModel_PyKeras.h5)
Epoch  2/10 - loss: 0.5119 - categorical_accuracy: 0.7492 - val_loss: 0.4381 - val_categorical_accuracy: 0.7975 (val_loss improved)
Epoch  3/10 - loss: 0.4688 - categorical_accuracy: 0.7732 - val_loss: 0.4012 - val_categorical_accuracy: 0.8125 (val_loss improved)
Epoch  4/10 - loss: 0.4401 - categorical_accuracy: 0.7946 - val_loss: 0.3767 - val_categorical_accuracy: 0.8421 (val_loss improved)
Epoch  5/10 - loss: 0.4237 - categorical_accuracy: 0.8004 - val_loss: 0.3741 - val_categorical_accuracy: 0.8433 (val_loss improved)
Epoch  6/10 - loss: 0.4079 - categorical_accuracy: 0.8198 - val_loss: 0.3573 - val_categorical_accuracy: 0.8433 (val_loss improved)
Epoch  7/10 - loss: 0.4001 - categorical_accuracy: 0.8206 - val_loss: 0.3459 - val_categorical_accuracy: 0.8471 (val_loss improved)
Epoch  8/10 - loss: 0.3882 - categorical_accuracy: 0.8297 - val_loss: 0.3417 - val_categorical_accuracy: 0.8483 (val_loss improved)
Epoch  9/10 - loss: 0.3903 - categorical_accuracy: 0.8261 - val_loss: 0.3448 - val_categorical_accuracy: 0.8517 (val_loss did not improve)
Epoch 10/10 - loss: 0.3846 - categorical_accuracy: 0.8276 - val_loss: 0.3449 - val_categorical_accuracy: 0.8462 (val_loss did not improve)
: Elapsed time for training with 9600 events: 13.3 sec
: Creating xml weight file: dataset_pymva/weights/TMVAClassification_PyKeras.weights.xml
: Creating standalone class: dataset_pymva/weights/TMVAClassification_PyKeras.class.C
Factory : Training finished
Factory : Train method: GTB for Classification
: H e l p   f o r   M V A   m e t h o d   [ GTB ]
: --- Short description: Decision Trees and Rule-Based Models
: <Suppress this message by specifying "!H" in the booking option>
GradientBoostingClassifier(init=None, learning_rate=0.1, loss='deviance',
              max_depth=3, max_features=None, max_leaf_nodes=None,
              min_samples_leaf=1, min_samples_split=2,
              min_weight_fraction_leaf=0.0, n_estimators=100, presort='auto',
              random_state=None, subsample=1.0, verbose=0, warm_start=0)
: --- Saving State File In: dataset_pymva/weights/PyGTBModel.PyData
: Elapsed time for training with 9600 events: 1.11 sec
: Dataset[dataset_pymva] : Evaluation of GTB on training sample (9600 events)
: Dataset[dataset_pymva] : Elapsed time for evaluation of 9600 events: 0.0345 sec
: Creating xml weight file: dataset_pymva/weights/TMVAClassification_GTB.weights.xml
: Creating standalone class: dataset_pymva/weights/TMVAClassification_GTB.class.C
Factory : Training finished
: Ranking input variables (method specific)...
: No variable ranking supplied by classifier: PyKeras
: No variable ranking supplied by classifier: GTB
Factory : === Destroy and recreate all methods via weight files for testing ===
In [9]:
factory.TestAllMethods()
Factory : Test all methods
Factory : Test method: PyKeras for Classification performance
: Load model from file: dataset_pymva/weights/TrainedModel_PyKeras.h5
Factory : Test method: GTB for Classification performance
: --- Loading State File From: dataset_pymva/weights/PyGTBModel.PyData
: Dataset[dataset_pymva] : Evaluation of GTB on testing sample (2400 events)
: Dataset[dataset_pymva] : Elapsed time for evaluation of 2400 events: 0.00952 sec
In [10]:
factory.EvaluateAllMethods()
Factory : Evaluate all methods
Factory : Evaluate classifier: PyKeras
TFHandler_PyKeras : Variable     Mean        RMS      [     Min        Max ]
: -----------------------------------------------------------
: var1:  -0.019674   1.0126    [  -2.8208   5.7307 ]
: var2:  -0.025370   0.99752   [  -3.1672   5.7307 ]
: var3:  -0.025914   1.0079    [  -3.0141   5.7307 ]
: var4:  -0.023154   1.0059    [  -2.9557   5.7307 ]
: -----------------------------------------------------------
PyKeras : [dataset_pymva] : Loop over test events and fill histograms with classifier response...
Factory : Evaluate classifier: GTB
GTB : [dataset_pymva] : Loop over test events and fill histograms with classifier response...
TFHandler_GTB : Variable     Mean        RMS      [     Min        Max ]
: -----------------------------------------------------------
: var1:  -0.019646   1.6797    [  -4.8163   4.5708 ]
: var2:  -0.028834   1.5789    [  -5.2407   4.4671 ]
: var3:  -0.036699   1.7446    [  -5.2331   4.6430 ]
: var4:   0.11995    2.1669    [  -6.3160   4.8976 ]
: -----------------------------------------------------------
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------
: DataSet Name     MVA Method     ROC-integ
: dataset_pymva    PyKeras        0.928
: dataset_pymva    GTB            0.918
: -------------------------------------------------------------------------
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------
: DataSet Name     MVA Method     Signal efficiency from test sample (from training sample)
:                                 @B=0.01           @B=0.10           @B=0.30
: dataset_pymva    PyKeras        0.357 (0.335)     0.737 (0.780)     0.963 (0.957)
: dataset_pymva    GTB            0.295 (0.395)     0.733 (0.788)     0.947 (0.948)
: -------------------------------------------------------------------------
: Dataset:dataset_pymva : Created tree 'TestTree' with 2400 events
: Dataset:dataset_pymva : Created tree 'TrainTree' with 9600 events
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
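The ROC integrals quoted in the evaluation table can also be queried programmatically, which is handy for scripted comparisons; a sketch, assuming your ROOT version provides TMVA::Factory::GetROCIntegral:

# Retrieve the ROC integrals directly from the factory (assumes GetROCIntegral exists in this ROOT version)
print('PyKeras ROC integral:', factory.GetROCIntegral(dataloader, 'PyKeras'))
print('GTB     ROC integral:', factory.GetROCIntegral(dataloader, 'GTB'))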
Print ROC¶
In [11]:
# Enable Javascript for ROOT so that we can draw the canvas
%jsroot on
# Print ROC
canvas = factory.GetROCCurve(dataloader)
canvas.Draw()
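To apply a trained classifier to new events outside the factory, the usual TMVA application step goes through TMVA::Reader. A minimal sketch using the GTB weight file written above (variable names as in the dataloader; the event values here are dummies, not real data):

from array import array

# Application sketch: evaluate the trained GTB classifier event by event (illustrative values only)
reader = ROOT.TMVA.Reader('!Color:!Silent')
buffers = []
for name in ('var1', 'var2', 'var3', 'var4'):   # same order as in the dataloader
    buf = array('f', [0.])
    reader.AddVariable(name, buf)
    buffers.append(buf)
reader.BookMVA('GTB', 'dataset_pymva/weights/TMVAClassification_GTB.weights.xml')

# Fill the buffers for one (dummy) event and read back the classifier response
for buf in buffers:
    buf[0] = 0.5
print('GTB response:', reader.EvaluateMVA('GTB'))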