Max Pooling Layers : These layers shrink the feature maps (the spatial size of the features), which reduces the computational resources needed and helps prevent overfitting
Fully Connected Layers : They appear at the end of the CNN; they’re just linear layers that take the results of the convolutional layers as inputs and output the label
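The two layer types above can be sketched in plain NumPy. The weights here are random stand-ins (untrained), so the “predicted” label is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Max pooling: shrink a 4x4 feature map to 2x2 by keeping
# the maximum of each non-overlapping 2x2 window.
def max_pool_2x2(fm):
    h, w = fm.shape
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 3, 2],
    [2, 6, 1, 4],
])
pooled = max_pool_2x2(feature_map)
print(pooled)  # [[4 5] [6 4]] -- a quarter of the values left to process

# Fully connected layer: flatten the pooled features and apply y = Wx + b,
# with one row of weights per digit class (0-9).
x = pooled.ravel().astype(float)     # 4 flattened features
W = rng.standard_normal((10, 4))     # hypothetical, untrained weights
b = np.zeros(10)
logits = W @ x + b
label = int(np.argmax(logits))       # index of the highest score = predicted digit
```

The pooling step keeps only the strongest activation in each window, which is exactly why the later layers have less work to do.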
That’s the architecture, but the really amazing part of a CNN is the kernels inside its convolutional layers! These kernels are just matrices (basically a bunch of numbers arranged in a box), but they’re able to extract tons of features from the image.
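For example, here is a hand-written vertical-edge kernel at work. The sliding weighted sum is the same operation a convolutional layer performs; the difference is that a real CNN learns its kernel values during training rather than having them hard-coded:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, taking a weighted sum at each position (valid padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a bright vertical stripe down the middle.
image = np.array([
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
])
# A classic vertical-edge kernel: it responds wherever pixel
# intensity changes from left to right.
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
])
result = convolve2d(image, kernel)
print(result)  # [[-27. 0. 27.]] -- strong responses at the stripe's two edges
```

The large positive and negative values mark the two sides of the stripe, which is exactly the “vertical line” feature the article describes.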
These new Quantum Machine Learning algorithms are but a testament to what is to come. Even though quantum computers are in their infancy, we have already seen QML algorithms outperforming our old ones!
By extracting those features you can put them into a neural network and classify your image! But as cool as that sounds, there are two Achilles’ heels:
1. Convolutional layers : Instead of kernels, you have quantum gates that are applied to adjacent qubits
To tackle the first problem, we could just let qubits represent the quantum system! Introducing: Quantum Convolutional Neural Networks.
The pooling layers do the opposite, exponentially decreasing the number of qubits.
CNNs are actually able to achieve pretty insane results: 99.75% accuracy! CNNs’ incredible power is due to their ability to look at the surrounding pixels and, based on that, extract features of the image.
Let’s do a quick run-through of how a machine could see. First, let’s take the MNIST dataset, a dataset of handwritten digits from 0 to 9:
The ability of an organ to take photons from the outside world, focus them, and then convert them into electrical signals is pure awesomeness! But what’s even more awesome is the organ behind your eyeballs — the brain!
A Quanvolutional Neural Network (QNN) is basically a CNN but with quanvolutional layers (much like how CNNs have convolutional layers). A quanvolutional layer acts and behaves just like a convolutional layer!
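As a rough NumPy sketch (a simulation, not a real quantum device), one common scheme encodes each pixel of a 2×2 patch as a rotation on its own qubit, runs the four qubits through a fixed random unitary (standing in for a random quantum circuit), and reads out one expectation value per qubit as the filter’s output channels. The specific encoding and the random unitary here are illustrative assumptions, not the article’s exact construction:

```python
import numpy as np

rng = np.random.default_rng(42)

def ry(theta):
    """Single-qubit Y-rotation gate (real-valued, so the whole state stays real)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quanvolutional_filter(patch):
    """Toy quanvolutional filter on a 2x2 patch, simulated with a 16-amplitude statevector."""
    # Step 1: encode -- each pixel value sets a rotation angle on its own qubit:
    # |psi> = RY(pi*p0)|0> (x) RY(pi*p1)|0> (x) RY(pi*p2)|0> (x) RY(pi*p3)|0>
    state = np.array([1.0])
    for pixel in patch.ravel():
        state = np.kron(state, ry(np.pi * pixel) @ np.array([1.0, 0.0]))
    # Step 2: apply a fixed random orthogonal 16x16 matrix
    # (a stand-in for an arbitrary 4-qubit circuit).
    q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
    state = q @ state
    # Step 3: measure <Z> on each qubit -> 4 output values,
    # playing the role of a conv kernel's feature-map channels.
    probs = state ** 2
    outputs = []
    for wire in range(4):
        bit = (np.arange(16) >> (3 - wire)) & 1   # this qubit's bit in each basis state
        outputs.append(np.sum(probs * (1 - 2 * bit)))
    return np.array(outputs)

patch = np.array([[0.0, 0.5], [1.0, 0.25]])
print(quanvolutional_filter(patch))  # 4 expectation values, each in [-1, 1]
```

Just like a classical kernel, you would slide this filter over every 2×2 patch of the image to build the output feature maps.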
By the first layer, the kernels can start telling which images have vertical lines, horizontal lines and different colors. By layer 2, you can put those features together and form more complex shapes like corners or circles.