After spending some time watching YouTube tutorials and reading articles, I have gathered some information about convolutional networks and applied it to my situation:
- Regarding batch size and learning rate, I tried different values; a batch size of 32 with a learning rate of 0.0001 gave me the best results for the reduced-data model.
- Adding a batch normalization layer before the activation function in each CNN layer keeps the activations from spreading apart, which improves both the performance and the speed of the training process.
- A dropout layer before the fully connected layers. Dropout helps prevent overfitting, which is easy to run into now that I am working with a reduced dataset. Also, Andrew Ng explains in one of his videos that in neural networks that work on images, a dropout layer is used almost by default; it is common practice.
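The post doesn't show code, so here is a minimal NumPy sketch (not my actual training code, where these would be framework layers) of what the two added layers do at training time: batch normalization recentres and rescales each feature across the batch, and inverted dropout zeroes random units while rescaling the survivors so the expected activation is unchanged.

```python
import numpy as np

def batch_norm_train(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-time batch norm: normalize each feature to zero mean /
    unit variance across the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def dropout_train(x, rate=0.5, rng=None):
    """Inverted dropout: zero a fraction `rate` of units and rescale the
    survivors by 1/(1-rate) so the expected activation stays the same."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# A batch of 32 samples (the batch size from the post) with 4 features.
x = np.random.default_rng(1).normal(loc=5.0, scale=3.0, size=(32, 4))
normed = batch_norm_train(x)
print(normed.mean(axis=0).round(6))  # per-feature means, ~0 after normalization
print(normed.std(axis=0).round(3))   # per-feature stds, ~1 after normalization
```

In the real CNN these operations sit inside the layer stack (conv → batch norm → activation, and dropout just before the dense layers); this sketch only illustrates the mechanism.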
Results for the model trained with the reduced dataset:
(ignore the y_pred column; % is computed by dividing right/ones)
To check all these improvements I used the reduced dataset, for timing reasons. Accuracy on the public test set improved by about 4% compared to the very first model I trained. Not bad. Almost all classes are predicted better, with the exception of class 6 (Neutral), which got about 10% worse, and class 1 (Disgust), which remains unpredicted.
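For reference, my reading of the "right/ones" note above is per-class accuracy: correct predictions for a class divided by the number of true samples of that class. A small sketch with made-up labels (the class names follow the FER-2013 convention used in this post; the numbers are illustrative, not my actual results):

```python
import numpy as np

# FER-2013 emotion classes, indexed 0-6 as in the post (1: Disgust, 6: Neutral).
CLASSES = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def per_class_accuracy(y_true, y_pred, n_classes=7):
    """For each class c: right/ones, i.e. (# samples of class c predicted
    correctly) divided by (# samples whose true class is c)."""
    acc = np.zeros(n_classes)
    for c in range(n_classes):
        ones = np.sum(y_true == c)                      # true samples of class c
        right = np.sum((y_true == c) & (y_pred == c))   # correctly predicted
        acc[c] = right / ones if ones else 0.0
    return acc

# Tiny made-up example: 8 samples over classes 0 (Angry), 3 (Happy), 6 (Neutral).
y_true = np.array([0, 0, 3, 3, 3, 6, 6, 6])
y_pred = np.array([0, 3, 3, 3, 6, 6, 6, 0])
print(per_class_accuracy(y_true, y_pred).round(2))
# → [0.5  0.   0.   0.67 0.   0.   0.67]
```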
The improvement is not amazing, but it is something. My guess is that two big issues are at play here:
- The training dataset is small.
- The model is too simple to learn something as complicated as facial emotion.
Before I move to a more complex CNN, I will try these small improvements on my model trained with the entire dataset and see if they deliver a similar gain.
Results for the model trained with the entire dataset:
(ignore the y_pred column; % is computed by dividing right/ones)
So, good news! The changes I added to the reduced-dataset model improved the model trained with the entire dataset by a similar amount (about 4%). Here too, almost all classes improved their predictions (yes! class 1, Disgust, is not zero anymore :)), but class 6, Neutral, lost about 10%.
Those are my little improvements for now. I'm off to get a well-deserved coffee.