In part four of Machine Learning Zero to Hero, AI Advocate Laurence Moroney (lmoroney@) builds an image classifier for rock, paper, and scissors. In episode one, we introduced the rock-paper-scissors scenario and discussed how difficult it would be to write traditional code to detect and classify these gestures. As the episodes progressed into machine learning, we learned how to build neural networks that detect patterns in raw pixels, classify them, and detect features using convolutions. In this episode, we put everything from the first three parts of the series together.
Links:
Colab notebook → http://bit.ly/2lXXdw5
Rock, paper, scissors dataset → http://bit.ly/2kbV92O
This video is also subtitled in Chinese, Indonesian, Italian, Japanese, Korean, Portuguese, and Spanish.
Watch more Coding TensorFlow → http://bit.ly/2lytA4j
Subscribe to the TensorFlow channel → http://bit.ly/2ZtOqA3
Great work Sir
Artificial intelligence, the "dream" of a billion…
Thanks
Please keep these awesome uploads coming; they're so helpful for us!! Thank you 🙂!!
Thanks for spreading the knowledge 😊👍
Thanks for the short video. You might have covered this in the other videos (parts 1 through 3), but what guidelines can you provide for network architecture? In other words, I believe you used 4 conv2d layers in this example. Why 4 layers vs. 6 layers? Just looking to get better at this facet of modeling. Thanks again for the tips/tricks.
very nice. thank you 🙂
I'm gonna implement this for an augmented reality application. Thank you.
Very good, let's test it!
What's next?
How do we tell whether a problem requires a deep neural network rather than one with a single hidden layer?
don't finish it.
One query regarding the CNN code: in the video, the first and second layers have 64 filters each.
So when I pass a single image through the first layer, we get 64 outputs.
Do we then pass each of those 64 outputs to the 2nd layer's 64 filters?
So is the total number of outputs from the second layer 64×64 = 4096, meaning for one single image we end up with 4096 features by the end of the 2nd CNN layer?
Thanks for the video, Mr. Laurence. Kindly help me sort out this issue.
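To the question above: each filter in the second convolutional layer spans all 64 channels coming out of the first layer, so the second layer still produces 64 feature maps, not 64 × 64 = 4096. A minimal Keras sketch (layer sizes assumed to match the video):

```python
import tensorflow as tf

# Two convolutional layers with 64 filters each, as in the video.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
])

# Each of the second layer's 64 filters spans ALL 64 input channels,
# so the output still has 64 channels, not 4096.
out = model(tf.zeros((1, 150, 150, 3)))
print(out.shape)                               # (1, 146, 146, 64)
print(model.layers[1].get_weights()[0].shape)  # (3, 3, 64, 64): each filter is 3x3x64
```

So the "mixing" of the 64 earlier feature maps happens inside each filter's depth dimension, not by multiplying the filter counts.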
please add more videos
You guys are doing awesome work for humanity. We love you. Keep making these kinds of videos.
Is there any video, or could you make one, on neural networks with a full explanation of basics like convolution, kernels, padding, strides, channels, max pooling…
Thanks for this episode
wow.
Please make a video on implementing an ML model from script to deployment. Even a short description would be enough.
It would be really helpful if you covered full deployment of an ML model to production level. Thank you for this video, Laurence Moroney 😄
Thank you Laurence, wonderfully high-quality training. I have the perfect real-world problem for this in my business.
Is this possible on the Raspberry Pi 4B model?
Sir, is it possible to train InceptionV3 on my classes and get output for my classes and the previously trained classes together? If I have 5 classes and InceptionV3 has 100 classes, then I want my output to cover 105 classes.
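The combined 105-class setup asked about isn't something Keras gives you out of the box, but the standard transfer-learning pattern is to reuse InceptionV3's convolutional base and train a new head just for your own classes. A sketch (weights=None here to avoid the download; you would normally pass weights='imagenet'):

```python
import tensorflow as tf

# Reuse InceptionV3 as a frozen feature extractor and add a new head
# for 5 custom classes. This replaces the original 1000-class head
# rather than extending it.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(150, 150, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # your 5 classes
])
```

To also predict the original classes alongside your own, you would need training data for all of them (or a second head kept from the original model); the single-head sketch above only covers the custom classes.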
Thank you. Where do I find the videos for TensorFlow 2.0? Hoping for more videos, with advanced networks like GANs, reinforcement learning, or putting this image recognition model on a cellphone.
Excellent series, really well presented, thank you for the tuition Laurence.
This is awesome! I hope to attend the upcoming O'Reilly Tensorflow World Conference and surround myself with great people! 😁👍
Great stuff! Though, I missed the final step which is to convert the trained algorithm into TF lite so we can use it in a mobile app 🙂
How can I draw a bounding rectangle around what the model detected? Thank you.
Awesome explanation, sir. I was struggling to start with DL; these videos put me on the right path, thanks a lot… And when can we expect an NLP session in Python?
Could you make a video on how to segment an image? That is, removing the background so only the outline of an animal or object remains. Thank you…
Nice series! But it only scratches the surface of deep learning; I hope there are more in-depth tutorials later.
Good introduction Laurence. Thanks
It would be great if you kept posting these videos, or even better, created a series on more advanced topics. I loved these and learned a lot from the linked notebooks. Thank you!
Life is so much better with simple explanations. Thank you.
Amazing Explanation !! Thank You so much Laurence Moroney!
Great tutorials! I wish you'd do way more episodes, perhaps in a longer format.
Great tutorials! Quick question: can I do this with TensorFlow 1.14?
I want to take the TensorFlow specialization course on Coursera; I have a basic understanding of machine learning. Should I start with that course, or with Andrew Ng's Machine Learning? Please guide me.
Looking forward to more tutorials Laurence !!!
I have a question that has been bugging me for the past couple of weeks: how do I work with TFRecord data? I'm creating a dataset within Earth Engine and exporting it as TFRecords, with images in a 256×256 format, and I'm trying to create a classifier by feeding them to my neural net, but I'm really confused about how to use the data I exported in TFRecord format. If anyone can give me any explanation of how to use it, I'd appreciate it so much. Thx!
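For the TFRecord question above, the usual pattern is tf.data.TFRecordDataset plus tf.io.parse_single_example. The feature keys, dtypes, file name, and image layout below are assumptions; they must match how the records were actually exported from Earth Engine:

```python
import tensorflow as tf

# Hypothetical feature spec: the keys and types depend on how the
# records were written when exporting from Earth Engine.
feature_spec = {
    'image': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_raw(example['image'], tf.uint8)
    image = tf.reshape(image, (256, 256, 3))     # assumed 256x256 RGB, raw bytes
    image = tf.cast(image, tf.float32) / 255.0   # normalize to [0, 1]
    return image, example['label']

# 'training.tfrecord' is a hypothetical path to the exported file.
dataset = (tf.data.TFRecordDataset('training.tfrecord')
           .map(parse_example)
           .batch(32))
# The dataset can then be passed straight to model.fit(dataset, ...).
```

If the export used a different encoding (e.g. PNG-compressed bytes), swap tf.io.decode_raw for the matching decoder such as tf.io.decode_png.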
The best tf2.0 course ever! Super great job. Thanks Laurence
I'm less than a week into Python after a year on JavaScript/Ember.js. Learned JS first because it was closest to HTML, CSS, etc. During that time I struggled mightily because I was always attempting to read technical papers about BERT, neural networks, etc. Became quite overwhelmed thinking I'd never be able to learn all the complex maths needed to perform the text analysis I've always dreamed of. Little did I know there were so many brilliant people who've already done the heavy lifting. I just need to learn how to call the libraries. Thank you for making these concepts so brilliantly accessible! I get it!!
Can I do this without TensorFlow?
I don't see the image augmentation in the Colab notebook → http://bit.ly/2lXXdw5. Can anybody tell me where it is? Thanks.
I am not able to download those files with that code.
This is really excellent. Thanks very much, Laurence.
Thank you for this wonderful content but how do i learn more?????
Nice, sweet, and short playlist. I'd like to know: does TensorFlow have any shape-classification dataset, not handwritten drawings but actual images like circles, triangles, and so on? Otherwise, some help with how to create a custom dataset would be appreciated.
Sir, can you please help me understand the significance of the last element, i.e. the 3, in the input_shape tuple? You may suggest more videos or a notebook to understand this in more detail.
And thanks for the short series containing a huge amount of information.
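On the input_shape question above: the final 3 is the number of color channels, since an RGB image carries three values (red, green, blue) per pixel; a grayscale image would use 1 instead. A minimal sketch:

```python
import numpy as np
import tensorflow as tf

# input_shape=(150, 150, 3): height 150, width 150, 3 color channels (RGB).
# A grayscale model would use input_shape=(150, 150, 1).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
])

rgb_batch = np.zeros((1, 150, 150, 3), dtype=np.float32)  # one RGB image
print(model(rgb_batch).shape)  # (1, 148, 148, 16)
```

Feeding the model a 150×150 image without the 3-channel dimension would raise a shape error, which is why loaders like ImageDataGenerator produce arrays in this (height, width, channels) layout.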
Hi Prof. Laurence, you teach like a true professor. Requesting you to create a series on applications of deep learning in NLP, an intensive one. Thank you so much, sir, for creating such easy-to-follow videos. God bless you.
yes, good work … more tutorials , maybe training a neural network to generate art or music 🧠🤖
Hi, thanks a lot for this introduction to neural networks. Could you please guide me on how to train a model on images of people, recognize their faces, and match them against an existing database? Also, if possible, how to detect the emotions of people in videos. That would be very helpful to me. Thanks a lot again!
What if I don't want to use your dataset? How do I load my own?
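For loading your own images (this question and similar ones above), Keras's flow_from_directory reads a local folder with one subdirectory per class. The 'my_dataset' path and class names below are hypothetical; the scaffolding at the top just fabricates a tiny stand-in dataset so the snippet runs on its own:

```python
import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Scaffold a tiny stand-in dataset: one subdirectory per class, which is
# exactly the layout flow_from_directory expects for your own images.
for label in ('paper', 'rock'):
    os.makedirs(os.path.join('my_dataset', label), exist_ok=True)
    pixels = np.random.randint(0, 255, (150, 150, 3), dtype=np.uint8)
    Image.fromarray(pixels).save(os.path.join('my_dataset', label, 'example.png'))

datagen = ImageDataGenerator(rescale=1 / 255.0)
train_generator = datagen.flow_from_directory(
    'my_dataset',              # hypothetical local path
    target_size=(150, 150),    # resize everything to the model's input size
    class_mode='categorical',
)
print(train_generator.class_indices)  # {'paper': 0, 'rock': 1}
# model.fit(train_generator, epochs=25) would then train on these images.
```

The folder names become the class labels, assigned indices in alphabetical order.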
Can anyone tell me where's the Jupyter Notebook of this video? Can't find it!
Thanks very much. You are a Rose.
Only 4 parts for ML? Does this series cover the whole of ML?
Hey Laurence, the code you provided has been training for over 4 hours now, and is still at epoch 1. Why is that?
So are these updated for Anaconda Python 3, with Pillow instead of PIL? These tutorials are super helpful for getting going in this subject. Thanks for the series.
First, let me appreciate the intellect of the presenter. This is marvelous. Second, I can't believe this topic, machine learning, can be simplified like this. I initially thought of neural networks as a subject too complex to be understood. Now I've absorbed every bit of this. Thank you.
This was a wonderful series. It's just that I'm trying to run this in my own Jupyter notebook, using my personal dataset of hand gestures like right_click, palm, and left_click. My directory looks like this:
Dataset → right_click → seq_01 = images, and so on. But when I run the exact code you showed, I get an error on the last line, i.e. history = model.fit_generator(train_generator). The error is as follows:
InvalidArgumentError: logits and labels must be broadcastable: logits_size=[32,3] labels_size=[32,2]
[[node categorical_crossentropy/softmax_cross_entropy_with_logits (defined at <ipython-input-3-ab06a749855a>:1) ]] [Op:__inference_train_function_1195]
Function call stack:
train_function
Kindly help.
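The error above (logits_size=[32,3] vs labels_size=[32,2]) usually means the model's final Dense layer outputs 3 values while the generator found only 2 class folders; the two counts must agree. A minimal sketch, assuming 3 gesture classes as in the directory listing:

```python
import numpy as np
import tensorflow as tf

# The last Dense layer's unit count must equal the number of class
# folders the generator discovers (check train_generator.num_classes).
num_classes = 3  # assumed: right_click, palm, left_click

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(150, 150, 3)),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# With matching shapes, a training step runs without the broadcast error:
x = np.zeros((4, 150, 150, 3), dtype=np.float32)
y = tf.keras.utils.to_categorical([0, 1, 2, 0], num_classes)
model.fit(x, y, epochs=1, verbose=0)
```

Also note that in TF 2.x, model.fit accepts the generator directly; fit_generator is deprecated.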
Ultimate solution to improve your CNN: a gigantic training dataset
How to decide how many convolution layers to add and how many filters to place in each convolution layer?
inject this into my veins please
When I tried model.predict(img, batch_size=10) with my own image as input, it returned: IndexError: list index out of range. It would be great if someone could help me out.
I can't see the link.
Hey Laurence, I copied the code from the notebook to my own Jupyter notebook, and there it takes about 5 minutes per epoch, whereas in the Colab notebook it takes a couple of seconds. How can this be?
Thank you Laurence. I really enjoyed following along.
Where I will get demo code which converts text present in image into actual text ?
Thanks for sharing. I also wonder where the notebook is.
This was a lot of fun and very informative. Note that the classifier does not work with images that do not come from this dataset. I took several cell phone pictures of scissors, several hands, and scaled to 150 x 150. They are all classified as paper – [[1. 0. 0.]]. Thanks for the videos and the notebooks.
I would like to see more lessons, please, thank you Laurence Moroney
Great lecture, thank you! For those looking for the image augmentation code: it's done with the ImageDataGenerator class.
Sir, please help me build image classification code to classify a single scene of a video into images and the different kinds of activities in that scene.
I've watched all 4 videos and I still have no clue how to do machine learning. Yes, I understand the concepts better, but it shouldn't be titled "Zero to Hero".
Thank you so much! I really appreciate the effort you took to make this series 🙂
Thanks a lot for the series. You are a nice educator…
Nooooooooooo…….!!!!!!!!!!!!!!!! WE WANT MORE TUTORIALS !!
Hi Laurence, thanks for these wonderful videos. One observation: upon executing the code for exercise 8 on the Fashion MNIST dataset, I see the following error:
TypeError: '>' not supported between instances of 'NoneType' and 'float'
I liked these tutorials! 😄
Using a CNN, are there ways to identify things in a picture that come in various shapes/resolutions?
Amazing 👍🏻 thx for making it clear, simple and SHORT👏🏻
thank you Laurence!
TensorFlow's ML Zero to Hero is super, Laurence.
Please put these tutorials in a playlist
really great video! the content is so simplified and well-explained!
Imagine the work and resources that go into creating a dataset. xD But it's good to know we can make an AI that can only see 28×28 images. We are now in the era of 8-bit AI.
Explained superbly Laurence. Appreciate your efforts. Thanks.
Hi guys,
I'm trying the example notebook and I get this error on uploaded = files.upload():
Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable.
MessageError: TypeError: google.colab._files is undefined
Any advice?
Thanks
I am trying a different dataset where the training data has 27 images each in Train/bag, Train/bat, Train/bathtub, while the validation/test data has 9 images each in Test/bag, Test/bat, Test/bathtub. I am getting the error below.
Any suggestion on the root cause of this error?
InvalidArgumentError: logits and labels must be broadcastable: logits_size=[81,3] labels_size=[81,4]
[[node categorical_crossentropy/softmax_cross_entropy_with_logits (defined at <ipython-input-12-89ea0893532c>:61) ]] [Op:__inference_train_function_2287]
Function call stack:
train_function
on this line of code:
history = model.fit(train_generator, epochs=25, steps_per_epoch=20, validation_data = validation_generator, verbose = 1, validation_steps=3)
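Here, labels_size=[81,4] against logits_size=[81,3] says the generator discovered four class folders, one more than the model's three outputs. flow_from_directory treats every subdirectory as a class, so a stray folder silently adds one (in Jupyter setups it is often .ipynb_checkpoints). A sketch that reproduces the situation with a hypothetical extra folder:

```python
import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Three intended classes plus one stray subdirectory ('extra_folder' is a
# stand-in name for illustration): the generator will report 4 classes.
for label in ('bag', 'bat', 'bathtub', 'extra_folder'):
    os.makedirs(os.path.join('Train', label), exist_ok=True)
pixels = np.zeros((64, 64, 3), dtype=np.uint8)
Image.fromarray(pixels).save(os.path.join('Train', 'bag', 'x.png'))

gen = ImageDataGenerator().flow_from_directory('Train', target_size=(64, 64))
print(gen.num_classes)    # 4: one more class than the model expects
print(gen.class_indices)  # reveals which folder is the unexpected one
```

Checking gen.class_indices is the quickest way to spot the offending folder; removing it (or matching the Dense layer's unit count to gen.num_classes) resolves the error.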
What does Dropout do?
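In short, Dropout randomly zeroes a fraction of its inputs during training (rescaling the survivors to preserve the expected sum), which discourages the network from over-relying on any single feature; at inference time it passes inputs through unchanged. A tiny demonstration:

```python
import numpy as np
import tensorflow as tf

# Dropout(0.5): during training, each value is dropped with probability
# 0.5 and surviving values are scaled by 1/(1-0.5) = 2.
layer = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 8), dtype=np.float32)

print(layer(x, training=True).numpy())   # each value is either 0.0 or 2.0
print(layer(x, training=False).numpy())  # unchanged at inference time
```

This random masking is a regularizer: it reduces overfitting, which is why Dropout layers often appear between the dense layers of image classifiers like the one in this video.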
Thank you Sir
Laurence… A modern-day hero. Thank you !
Excellent series, very informative. Hoping for more series like this in the future.
How can I use my own dataset (images)? I mean from my local drive.
I haven't ever seen a more amazing video!!
Mr. Narrator, you never explain the optimizers; why is that? Even the codelabs do the same. Nobody even says they will be covered later or links the documentation. Can we create our own optimizers? I need answers… everybody does, everybody should.
Hi,
I attached a link to a tutorial on detecting objects in images and real-time video.
The tutorial is based on TensorFlow, Python, and pre-trained models.
The link to the video: https://youtu.be/40_NC2Ahs_8
I also shared the Python code in the video description.
Enjoy,
Eran
Great video. It would be good to also have examples for, e.g., having one additional file per image containing the labels in some arbitrary format, and/or having mixes of labels as categories and floats.
Thank you for your time and effort, I have learned because of you
If I had to create an object detection device using ML, would I have to re-train the machine every time I switch said machine on?