Ahlad Kumar
India
Joined 30 Jul 2006
Hello Everyone!
I am Dr. Ahlad Kumar. My areas of specialization are image processing and deep learning. I would like to share my knowledge in simple language, for easy understanding, via the UA-cam medium.
If you like the content, you can support my channel:
Google Pay : kumarahlad@okicici
Hope to see you on my channel...!!
Natural Language Processing 3: Text Pre-processing: Tokenization
#nlp #word2vec #tokenization
Views: 841
Videos
Lecture 15: Support Vector Machine 3 (Soft SVM and Kernel SVM)
Views: 894 · 2 years ago
Lecture 13: Support Vector Machine (SVM) 1
Views: 974 · 2 years ago
ISRO Sponsored Generative Adversarial Network (GAN) Seminar
Views: 526 · 2 years ago
Indian Space Research Organisation (www.isro.gov.in/) sponsored seminar at DAIICT (www.daiict.ac.in/)
ISRO Sponsored Natural Language Processing (NLP) Seminar
Views: 338 · 2 years ago
Indian Space Research Organisation (www.isro.gov.in/) sponsored seminar at DAIICT (www.daiict.ac.in/)
Lecture 9: Proximal Gradient Descent
Views: 1.1K · 2 years ago
Lecture 7: Role of Regularization in Regression
Views: 700 · 2 years ago
Lecture 5: Non-Linear Regression and the Kernel Trick
Views: 1.1K · 2 years ago
Lecture 2: Basics of Machine Learning
Views: 1.1K · 2 years ago
Reinforcement Learning 7: Markov Reward Process
Views: 590 · 2 years ago
Reinforcement Learning 6: Markov Chain, Chapman-Kolmogorov Equation and its Python Implementation
Views: 995 · 2 years ago
Thanks Ahlad, I really liked your explanation of LSTM and GRU. Please continue to make videos on seq-2-seq models, Transformers, and MoE.
Sir, how can an adjacency matrix be fed to a machine learning model?
Sir, thanks for the video. I have further modified your code to enhance the accuracy.
Thanks! Great explanation.
23:00
Sir, where are the links and the code?
@AhladKumar pls help
@Ahlad Kumar I subscribed for premium content, but your 7 hidden videos are not showing for me.
23:21 How do we know how many fully connected layers there should be? I mean, why 120 & 84 FCs?
I joined the channel but I don't have access to all the reinforcement learning videos.
I joined the channel at 599 but I am not able to access the content.
By P(x), do you mean P(~x) {i.e., P(tilde x)}? 4:25
Hats off to you and the people behind this work. Absolutely amazing.
Can anybody explain to me what the noise input to the generator is and why we use it?
🤔
Thank you for sharing!
Hey everyone, please note that sir made a mistake at 28:00, which he clarified in the next video: the min over G and max over D on the RHS of the equation should also cover the expectation terms.
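For context, a minimal sketch of the corrected equation, assuming the lecture follows the standard GAN formulation (Goodfellow et al., 2014), where the min over G and max over D apply to the whole value function, including both expectation terms:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]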
Please give one or two more examples of finding gm by inspecting the resistances in the branches.
In the last configuration you described, why can't we take Rout as the Rout of the source-degeneration configuration, or the two-stage cascode resistance?
I think that at 0:48 you meant to write and say that the role of the generator is to "...fool the *discriminator*", not the generator.
The LSTM architecture is more complex than the RNN architecture, but your formulas for the gradient look much simpler than for the RNN. I think your formulas are incorrect. For example, in dL/dW_i = ... · di_t/dW_i you write that di_t/dW_i = h_{t-1}, but h_{t-1} also depends on W_i, because h_{t-1} = o_{t-1} * tanh(f_{t-1} * c_{t-2} + i_{t-1} * g_{t-1}), where i_{t-1} also depends on W_i.
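To make this point concrete, a minimal sketch in standard LSTM notation (the symbols W_i, U_i, i_t, h_t are the usual ones and may differ slightly from the lecture's):

i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \qquad h_t = o_t \odot \tanh(c_t)

\frac{dL}{dW_i} = \sum_t \frac{\partial L}{\partial i_t} \left( \frac{\partial i_t}{\partial W_i} + \frac{\partial i_t}{\partial h_{t-1}} \, \frac{d h_{t-1}}{d W_i} \right)

Because h_{t-1} itself depends on W_i through the earlier gates, the second term is nonzero, and the gradient must be unrolled recursively through time (full backpropagation through time), just as for a plain RNN.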
A huge amount of work done by you. Thanks teacher!!!
Thank you very much for the math explanation. It was very important for me!!! You have simplified it down to the smallest detail; now I can implement it in my code! Thanks!!!
Thank you!!!
💯💯💯
God bless you brother
Could you share this code?
If the decoder generates based on the condition, what is the point of the z? Does it also mean the condition c dominates the influence on the decoder?
this is gold.
Best graph neural network video
Link to the code?
just perfect explanation
Very nice
Hi Ahlad, your explanation is awesome. Thanks for posting the videos. It would be great if you could keep posting new lectures about the most recent DL technologies, such as transformers and so on.
great
Amazingly explained... understood the notation so, so easily. It's now comparatively easier to read a research paper related to GANs.
Sir, is there any video which explains latent space easily?
Sir, what do you mean by L2 norm, L1 norm, etc.?
NNs are not enough for images because: 1. every pixel is correlated with its neighbouring pixels, but a 1D representation takes the pixels row-wise, so the model can't properly learn this correlation; 2. there are too many parameters to optimize, because every image has so many pixels; 3. translation invariance: if you move the object in the image, say to the right, the NN will treat the images as different.
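A minimal Python sketch illustrating point 2 above (the layer sizes here are hypothetical, chosen only for illustration): it compares the weight count of a single dense layer on a flattened RGB image with that of a small convolutional layer.

# Hypothetical sizes, for illustration only.
H, W, C = 224, 224, 3   # input image: height, width, channels
hidden = 1000           # hidden units in one fully connected layer

# Dense layer on the flattened image: one weight per (pixel, unit) pair.
fc_weights = H * W * C * hidden   # = 150,528,000

# 3x3 convolution with 64 filters: weights are shared across positions.
conv_weights = 3 * 3 * C * 64     # = 1,728

print(f"dense layer weights: {fc_weights:,}")
print(f"conv layer weights:  {conv_weights:,}")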
very interesting but boring...
Sir, kindly explain what we have to do in order to test the discriminator model of our GAN.
Thank you for a great explanation. Is there a book covering this material?
💖💖
💖💖
💖💖
Excellent lecture series. Do you mind sharing the Google Colab? Thanks
You are great
Perfect explanation
@Ahlad Kumar Sir, thank you for making these amazing lectures. It is the best and most detailed lecture series on autoencoders. Thank you once again. _/\_