Q-done

Why Quantum Computers Won’t Replace Classical Computers Anytime Soon

https://www.forbes.com/sites/sap/2019/09/04/why-quantum-computers-wont-replace-classical-computers-anytime-soon/

Susan Galer, Brand Contributor
SAP BrandVoice | Paid Program

It’s easy to understand the allure of the super-processing powers of quantum computing when you consider the explosion of data from AI, machine learning, and the internet of things (IoT). IDC researchers predicted there will be over 300 billion connected things by 2021. Business models are being disrupted overnight. Workforce diversity and empowered customers are rising, while resources are getting scarce. The ability to manage and monetize large amounts of data is top of mind for leaders whose survival depends on connecting mountains of experience and operational data, from within and beyond company walls, to make better, faster decisions. Quantum computing may become one way of putting all this data to work.
AI-fueled innovations like self-driving cars will eventually require the kind of miniature, invisible computing power that quantum computers promise to deliver. (Photo credit: Getty)
“AI-fueled innovations like self-driving cars require tremendous amounts of computing power to safely navigate exceptional situations on roads every day. That kind of computing power will eventually be miniature and almost invisible because it has to fit inside our cars,” said Andrey Hoursanov, lead of quantum security at SAP. “The challenge with further miniaturization is that making transistors smaller will only work up to a point, after which quantum effects cannot be ignored.”
Industries that could benefit from quantum computing
Optimists think quantum computing will perform all tasks faster and smarter than classical machines. However, Hoursanov said that early experiments show quantum computers can solve some problems faster, but not all of them. He mentioned transportation and finance among the industries that could benefit first from quantum computing’s advantages. Industries like pharmaceuticals or battery manufacturing could benefit from quantum technologies even earlier, using quantum simulators before universal quantum computers become available.
“Route planning, supplier management, financial portfolio management, and customer satisfaction analysis are places where quantum’s unique ability to quickly find the optimal solution by analyzing huge amounts of heterogeneous data would work well,” said Hoursanov. “Classical computers get overwhelmed by endless calculations when it comes to these enormous amounts of data, or resort to approximations which might be just a little better than guesswork.”
Transportation companies could map out the most cost-effective route using far more parameters than today. Procurement could more easily select the best suppliers for individualized demands. Quantum computing could help people with limited budgets distribute investments for the greatest returns. Manufacturing is another industry where quantum computers could help scale complex assembly, factoring in workers, equipment, raw materials, customer demand and anything else relevant to producing goods, while saving time and maximizing resources.
Proof of quantum computing progress
A sure-fire sign that quantum computing is moving beyond hype and experimentation will be when we see it repeatedly solving real-life problems better than classical computing. However, the technology barriers to this remain huge. One challenge is the inability to store data in a quantum state for reasonably long time periods. Right now, quantum data can be reliably stored on a microsecond timescale. We need to get to hours, at least.
“By nature, a quantum bit loses its quantum properties over time and as soon as it interacts with other matter. This is called decoherence, and it leads to errors,” said SAP’s Laure Le Bars. “When this happens with classical computers, we correct errors behind the scenes. But for quantum bits, error correction technologies are not that advanced.”
Le Bars added that it’s difficult to control many quantum particles at once. “You need something much more universal and reprogrammable to manage the amount of data for advanced technologies like AI and machine learning,” she said.
Another major obstacle is efficiently transferring data between classical and quantum computers. It’s time-consuming to convert classical computer data from places like social networks, the stock market, or internal company systems into a quantum state for processing. For now, companies would likely spend more time converting data than they would save by running it through a quantum computer.
Developers will do the new math
Although quantum computers may not be right around the corner, classical software developers will need to prepare for a very different long-term future.
“Developers will need new skills to build software for quantum computing,” said Le Bars. “It will force programmers and technology architects to think differently about algorithmic problems than they have in the past. It’s not about more power. It’s a different approach and way of programming. Quantum computing could be good for certain problems, and developers need to begin exploring the possibilities.”
Le Bars said that SAP is working with other industry leaders to explore quantum technologies beyond computing as well, including through the company’s partnership in the Quantum Internet Alliance. While it’s difficult to predict the exact trajectory of quantum computing’s growth in these early days, the ongoing rise of big data means it will eventually impact future business.

Artificial Neural Networks for Total Beginners



https://towardsdatascience.com/artificial-neural-networks-for-total-beginners-d8cd07abaae4

Easy and Clear Explanation of Neural Nets (with Pictures!)

Rich Stureborg

Sep 4, 2019
Machine Learning drives much of the technology we interact with nowadays, with applications in everything from search results on Google to ETA prediction on the road to tumor diagnosis. But despite its clear importance to our everyday life, most of us are left wondering how this stuff works. We might have heard the term “artificial neural network,” but what does that really mean? Is it a robot that thinks in the same way a human does? Is it a supercomputer owned by Apple? Or is it just a fancy math equation?
Machine Learning actually covers everything from simple decision trees (similar to the ones you made in your Intro to Business Management course) to Neural Networks, complex algorithms that mimic the function of a brain. This article will dive into neural networks, since they are what’s behind most of the very impressive machine learning these days.

First, an Illustrative Example

To understand what machine learning is, consider the task of trying to predict the height of a tree based on the soil content in the ground. Now, since this is machine learning we are talking about, let’s assume we can get some really good data on this task: thousands of soil samples from all over the world.


There are a lot of measurements you can make on soil contents: things like moisture levels, iron levels, grain size, acidity, etc. They all have some effect on the health of a tree and how tall it grows. So let’s say that we examine thousands of trees in the world (all of the same kind, of course) and collect both data about their soil contents and the trees’ heights. We have just created a perfect dataset for machine learning, with both features (the soil contents) and labels (the heights). Our goal is to predict the labels using the features.


That definitely seems like a daunting task. Even if there is a relationship between soil contents and tree height, it certainly seems impossible to make accurate predictions, right? Well, machine learning isn’t always perfectly analogous to how our brains work, even if neural networks are modeled on brains. The important thing to remember is that these models aren’t making wild guesses as we humans might. Instead, they are coming up with exact equations that determine their predictions. Let’s start by simplifying the problem a bit.
It’s quite easy to imagine that a single feature like moisture will have a significant effect on tree height. Too dry, and the tree won’t grow; too moist, and the roots may rot. We could make an equation based on this single measurement, but it wouldn’t be very accurate because there are many more factors that go into the growth of a tree.


See how the hypothetical relationship above is not a great estimate? The line follows the general trends of the dots, but if that’s what you use to make your predictions on height you’ll be wrong most of the time. Consider the case where there is a perfect amount of moisture, but the soil is way too acidic. The tree won’t grow very well, but our model only considers moisture, so it will assume that it will. If we consider both measurements, however, we might get a more accurate prediction. That is, we would only say that the tree will be very tall when both the moisture and acidity are at good levels, but if one or both of them are at bad levels we may predict that the tree will be short.
So what if we consider more factors? We could look at the effect of moisture and acidity at the same time by combining the relationships into one equation.


Excellent. Now we have a more complex equation that describes the tree’s height, and it considers two features (measurements). Now we can combine even more features to make an even more complex equation. For the sake of clarity, I will call the final, combined equation our “model”. It models how the features affect height. Combining simple equations like this into a multi-dimensional model is pretty straightforward, and we can create a very complex model pretty fast. But for every tweak you can make to one of the simple equations (choosing a slightly different equation for the relationship between height and moisture), there are now thousands if not millions more ‘models’ that we have to try, all slightly different from one another. One of these models might be great at modeling the relationship between soil content and height, but most are probably really bad at it.


This is where machine learning comes in. It will create a model composed of many simpler equations, and then test how well it works. Based on its error (that is, how wrong the predictions are) it then tweaks the simpler equations only slightly, and tests how well that one works. When it tweaks the simpler equations, it is simply altering one of the graphs in the image above to look slightly different. It may shift the graph to the right or up and down, or it could slightly elongate peaks or increase the size of the valleys. Through a process similar to evolution, it will arrive at the best — or at least a good — solution. In fact, that’s why it’s called “machine learning”. The machine learns the pattern on its own, without humans having to tell it even simple information like “moisture is good for trees”.
If you’re curious about how the machine learning model picks the next combination of equations, you should read further about model training. Specifically, the concepts to master are stochastic gradient descent and backpropagation.
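To make that “tweak and re-test” idea concrete, here is a deliberately tiny Python sketch that is not from the article and is much simpler than real stochastic gradient descent: it fits a hypothetical one-feature model of height versus moisture by proposing small random tweaks and keeping only the ones that reduce the error.

    # A minimal sketch of "tweak the model, keep it if the error drops".
    # The data and the linear model (height ~ a * moisture + b) are made up
    # for illustration; real training uses gradient descent and backpropagation.
    import random

    data = [(0.2, 10.0), (0.5, 18.0), (0.8, 12.0)]  # hypothetical (moisture, height) samples

    def error(a, b):
        # Mean squared error of the current model over the data
        return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

    a, b = 0.0, 0.0
    for _ in range(10000):
        # Propose a slight tweak to the model's parameters
        new_a = a + random.uniform(-0.1, 0.1)
        new_b = b + random.uniform(-0.1, 0.1)
        # Keep the tweak only if it makes the predictions less wrong
        if error(new_a, new_b) < error(a, b):
            a, b = new_a, new_b

    print(a, b, error(a, b))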
Sidenote: If you ever studied Fourier series at university, they are a useful analogy for a neural network. In school, we learn that you can create complex waves, like a square wave, using a combination of simple sine waves. Well, we can also create a machine learning model from many simple equations in a similar fashion.

What are the Components of a Neural Network?

Neural networks are specifically designed based on the inner workings of biological brains. These models imitate the functions of interconnected neurons by passing input features through several layers of what are referred to as perceptrons (think ‘neurons’), each transforming the input using a set of functions. This section will explain the components of a perceptron, the smallest component of a neural network.

The structure of a perceptron

A perceptron (above) is typically made up of three main math operations: scalar multiplication, a summation, and then a transformation using a distinct equation called an activation function. Since a perceptron represents a single neuron in the brain, we can put together many perceptrons to represent a brain. That would be called a neural network, but more on that later.

Input

The inputs are simply the measures of our features. For a single soil sample, this would be an array of values for each measurement. For example, we may have an input of:


representing 58% moisture, 1.3 mm grain size, and 11 micrograms of iron per kilogram of soil. These inputs are what will be modified by the perceptron.
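In code, a single sample’s input can be nothing fancier than a short list of numbers; here is a minimal Python sketch using the values above:

    # One soil sample: moisture fraction, grain size (mm), iron (micrograms per kg)
    inputs = [0.58, 1.3, 11.0]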

Weights

Weights represent scalar multiplications. Their job is to assess the importance of each input, as well as directionality. For example, does more iron contribute a lot or a little to height? Does it make the tree taller or shorter? Getting these weights right is a very difficult task, and there are many different values to try.
Let’s say we tried values for all three weights in 0.1 increments over the range -10 to 10. The weights that showed the best results were w0 = 0.2, w1 = 9.6, w2 = -0.9. Notice that these weights don’t have to add up to any particular total. The important thing is how large they are and in what direction they point compared to one another. If we then multiply these weights by the inputs we had from before, we get the following result:
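As a quick Python sketch of that multiplication, using the example inputs and weights from above (the printed values carry a little floating-point noise):

    # Multiply each input by its corresponding weight
    inputs = [0.58, 1.3, 11.0]
    weights = [0.2, 9.6, -0.9]
    weighted = [w * x for w, x in zip(weights, inputs)]
    print(weighted)  # approximately [0.116, 12.48, -9.9]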


These values will then be passed onto the next component of the perceptron, the transfer function.

Transfer Function

The transfer function is different from the other components in that it takes multiple inputs. The job of the transfer function is to combine multiple inputs into one output value so that the activation function can be applied. This is usually done with a simple summation of all the inputs to the transfer function.


On its own, this scalar value is supposed to represent some information about the soil content. This value has already factored in the importance of each measurement, using the weights. Now it is a single value that we can actually use. You can almost think of this as an arbitrary weighted index of the soil’s components. If we have a lot of these indexes, it might become easier to predict tree height using them. Before the value is sent out of the perceptron as the final output, however, it is transformed using an activation function.
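In Python, that summation is a one-liner; this sketch just reuses the (rounded) weighted inputs from the step above:

    # Sum the weighted inputs into a single scalar
    weighted = [0.116, 12.48, -9.9]
    combined = sum(weighted)
    print(combined)  # approximately 2.7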

Activation Function

An activation function will transform the number from the transfer function into a value that dramatizes the input. Oftentimes, the activation function will be non-linear. If you haven’t taken linear algebra in university you might think that non-linear just means the function doesn’t look like a line, but it’s a bit more complicated than that. For now, just remember that introducing non-linearity to the perceptron keeps the output from varying linearly with the inputs and therefore allows for greater complexity in the model. Below are two common activation functions.


ReLU is a simple function that compares zero with the input and picks the maximum. That means that any negative input comes out as zero, while positive inputs are unaffected. This is useful in situations where negative values don’t make much sense, or for removing linearity without having to do any heavy computations.


The sigmoid function does a good job of separating values into different thresholds. It is particularly useful for values such as z-scores, where values towards the mean (zero) need to be looked at carefully, since a small change near the mean may significantly affect a specific behavior, but where values far from the mean probably indicate the same thing about the data. For example, if soil has lots and lots of moisture, a small addition to moisture probably won’t affect tree height, but if it has a very average level of moisture then removing some small amount of moisture could affect the tree height significantly. It emphasizes the difference in values if they are closer to zero.
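Both activation functions are short enough to write out directly. Here is a minimal Python sketch; the test values are just illustrations:

    import math

    def relu(z):
        # Zero for negative inputs, unchanged for positive inputs
        return max(0.0, z)

    def sigmoid(z):
        # Squashes any real number into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-z))

    print(relu(-3.2), relu(2.696))        # 0.0 and 2.696
    print(sigmoid(-3.2), sigmoid(2.696))  # roughly 0.04 and 0.94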
When you think of activation functions, just remember that it’s a nonlinear function that makes the input more dramatic. That is, inputs closer to zero are typically affected more than inputs far away from zero. It basically forces values like 4 and 4.1 to be much closer, while values like 0 and 0.1 become more spread apart. The purpose of this is to allow us to pick more distinct decision boundaries. If, for example, we are trying to classify a tree as either “tall,” “medium,” or “short,” values of 5 or -5 very obviously represent tall and short. But what about values like 1.5? Around these numbers, it may be more difficult to determine a decision boundary, so by dramatizing the input it may be easier to split the three categories.
We pick an activation function before training our model, so the function itself is always the same. It is not one of the parameters we toggle when testing thousands of different models; only the weights change. The output of the ReLU activation function will be:


Bias

Up until now, I have ignored one element of the perceptron that is essential to its success. It is an additional input of 1. This input always stays the same, in every perceptron. It is multiplied by a weight just like the other inputs are, and its purpose is to allow the value before the activation function to be shifted up and down, independent of the inputs themselves. This allows the other weights (for the actual inputs, not the weight for the bias) to be more specific since they don’t have to also try to balance the total sum to be around 0.


To be more specific, bias might shift graphs like the left graph to something like the right graph:


And that’s it! We’ve now built a single perceptron, a model that imitates the brain’s neuron. We also understand that while that sounds fancy, it really just means that we can create complex multi-dimensional equations by altering a few weights. As you saw, the components are surprisingly simple. In fact, they can be summarized by the following equation:


From here on out I will be representing this equation (i.e. a single perceptron) with a green circle. All of the components we have seen so far: inputs, bias, weights, transfer function, and an activation function are all present in every single green circle. When an arrow points into this green circle, it represents an individual input node, and when the arrow points out of the green circle it represents the final output value.
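Put another way, the whole perceptron fits in a few lines of Python. This is a sketch rather than library code, and the bias value here is just a placeholder:

    def relu(z):
        return max(0.0, z)

    def perceptron(inputs, weights, bias, activation=relu):
        # Weighted sum of the inputs, shifted by the bias, then activated
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return activation(total)

    # The example soil sample and weights from earlier, with a placeholder bias of 0
    print(perceptron([0.58, 1.3, 11.0], [0.2, 9.6, -0.9], bias=0.0))  # approximately 2.7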


Multi-Layer Perceptrons

To represent a network of perceptrons we simply plug the output of one into the input of another. We connect many of these perceptrons in chains, flowing from one end to another. This is called a Multi-Layer Perceptron (MLP), and as the name suggests there are multiple layers of interconnected perceptrons. For simplicity, we will look at a fully-connected MLP, where every perceptron in one layer is connected to every perceptron in the next layer.
You might be wondering what a ‘layer’ is. A layer is just a row of perceptrons that are not connected to each other. Perceptrons in an MLP are connected to every perceptron in the layer before it and every perceptron in the layer after it, but not to any of the perceptrons within the same layer. Let’s look at an MLP with two input values, two hidden layers, and a single output value. Let’s say the first hidden layer has two perceptrons and the second hidden layer has three.


The perceptrons here will all take in the inputs (arrows pointing towards the circle), perform the operations described in the previous section, and then push the output forward (arrow pointing out of the circle). This is done many times to create more and more complex equations, all considering the same information multiple times to make an accurate prediction. Now, although this article is meant to remove “the magic” from neural networks, it is very difficult to explain why this helps make more accurate predictions. In fact, the method I am describing is often referred to as a “black box” approach, because we don’t know why the equations it picks are important. It is currently an active area of research. What we can understand, however, is what the neural network is doing. That is as simple as following the weights through each and every perceptron.
The reason we call the layers between the input layer and the output layer “hidden” is that once the values are fed in from the input, it doesn’t serve us well to look at how they are transformed until they exit the last output node. This is because these intermediary values are never used to evaluate the performance of our model (i.e., computing error values for predictions made on sample data).
And that’s really it. Combining many of these perceptrons lets us create far more sophisticated equations than a single perceptron could on its own.
The output value of an MLP like this is capable of making predictions on height using soil content measurements. Of course, picking the correct weights inside every single perceptron takes a lot of computational power, but this is exactly what a ‘neural network’ does.
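For readers who prefer code, here is a minimal Python sketch of a forward pass through the 2-2-3-1 network described above. The weights and biases are made-up placeholders, not the ones used in the figures, so the output is only illustrative:

    def relu(z):
        return max(0.0, z)

    def layer(inputs, weight_rows, biases, activation=relu):
        # Each row of weights (plus one bias) defines one perceptron in this layer
        return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weight_rows, biases)]

    x = [0.58, 1.3]                                        # the two input features
    h1 = layer(x, [[0.5, -1.2], [2.0, 0.3]], [0.1, 0.0])   # first hidden layer (2 perceptrons)
    h2 = layer(h1, [[1.0, 0.5], [-0.4, 2.2], [0.7, 0.7]],
               [0.0, 0.0, 0.0])                            # second hidden layer (3 perceptrons)
    y = layer(h2, [[0.9, 1.1, -0.3]], [0.0])               # single output perceptron
    print(y)  # the network's (untrained) height prediction for this sample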

Let’s see it in Action!

Here I will take two measurements from before through an entire neural network. The structure will be the same as the network I showed above. This will be very tedious, but you may follow along if you wish. I will be ignoring the bias for the sake of simplicity.
Here are the values of the two features I will use. They represent 58% moisture and 1.3mm grain size.


I will use the following (random) weights and activation functions for each perceptron. Recall that the ReLU activation function turns negative values into 0 and does not transform positive values:


So let’s get to it! The first two perceptrons both take the two inputs (blue), multiply them by the associated weights (yellow), add them together (purple), and then apply the ReLU function (green):


These outputs become the inputs for each perceptron in the next layer. So every perceptron in the second hidden layer (there are three) will use 338.9 and 42 as inputs. Those perceptrons follow these equations:


For the next layer, however, notice that we now have three, not two, inputs: 89.9, 16.22, and 0. All three inputs have to be included in the equation of the last perceptron, and therefore it will have three weights (in yellow below). Its equation is still as straightforward as the others.


As a summary, here are the values each perceptron produced given its inputs:


And there you have it! This neural network predicted a tree with a height of 165.72 feet! Now we have to compare the predicted results to the actual height of the sample tree in our data. Calculating some error value can be as straightforward as taking the difference between our predicted height and the actual height. Then we repeat this process with slightly different weights over and over until we find weights that predict tree height well for many samples. But that takes much too long for a human to do, so we need a machine to compute the optimal weights.
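Here is a minimal Python sketch of that error calculation; the “actual” height below is hypothetical, just to show the arithmetic:

    predicted_height = 165.72   # the network's output from the walkthrough above
    actual_height = 142.0       # hypothetical measured height for this tree

    difference = predicted_height - actual_height  # 23.72 feet too tall
    squared_error = difference ** 2                # a common way to score it: about 562.6
    print(difference, squared_error)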
Important takeaways:
  • The weights were totally random to simulate the starting point of a neural network. This model is clearly not ‘trained’ and therefore won’t do well once we put another sample into it. We would use the results above to determine how to alter the weights.
  • The intermediary values don’t tell us much at all. For example, the output from the top node in the first hidden layer is 338.9, but that’s nowhere close to the value that the neural network predicted, 166 ft. It’s important not to try to interpret the intermediary values as having a real-world meaning. This is why we call these layers ‘hidden.’

That’s it!

That’s how neural networks work. Make sure to hit the applause button if you enjoyed this explanation, and feel free to leave comments :)

