Pretty much every phenomenon in the natural world is analog. What does "analog" mean? The key bits boil down to this phrase in the definition for analog:
...continuously variable physical quantities...
The "physical quantity" can be anything - temperature of an apple, pressure in a balloon, water level of some lake, total internal energy of the third moon of Omicron Persei 8, voltage between two points in an electrical circuit, etc.
To understand the "continuously variable" part, consider the plot of some physical quantity below as it happily marches along in time changing its value.
If we were to zoom in on some part of the curve, we would again see a nice smooth variation. Zoom again, still the same. No matter how far we zoom in, we would see a nice smooth "continuous" variation in the value.
Hurray for nature! But what about digital computers, like our Circuit Playground? Let's look at that next.
You've probably heard it said that everything in a computer is either a 1 or a 0, either YES or NO, either ON or OFF, either HIGH or LOW, etc. This is entirely true for digital computers (yes, there are also analog computers), and the basic unit of information that stores that 1 or 0 is called a bit. All information must somehow be represented using one or more of these bits.
If we try to use just 1 bit, we don't get very far. It only allows us to represent two values.
By simply adding a second bit, we can now represent up to 4 different values.
The more bits we add, the more values we can represent. What about 10 bits?
Here's a summary of how many values can be represented by using 1 to 10 bits. Note that the growth is not linear - each additional bit doubles the count, so n bits can represent 2^n different values.
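That doubling is easy to check for yourself. Here's a tiny sketch that prints the value count for 1 to 10 bits using the 2^n relationship described above:

```python
# Each bit doubles the number of representable values: n bits -> 2**n values.
for n in range(1, 11):
    print(f"{n:2d} bits -> {2**n:4d} values")
```

Running it shows the familiar sequence 2, 4, 8, ... up to 1024 for 10 bits - the resolution of the Circuit Playground's ADC.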
So how do we get those nice analog values into our digital computer so we can do cool things with the information? Simple - just use an Analog to Digital Converter (ADC). There's all kinds of technical mumbo jumbo for how an ADC works, but for this guide it's good enough to just think of an ADC as a device where an analog signal goes in and a digital (1s and 0s) signal comes out.
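To make the "analog goes in, digital comes out" idea concrete, here's a minimal sketch of what an ADC effectively computes. The function name `adc_convert` and the 3.3 volt reference are illustrative assumptions, not any particular chip's API:

```python
def adc_convert(voltage, v_ref=3.3, bits=10):
    """Map an analog voltage in [0, v_ref] to a digital count (0 .. 2**bits - 1).

    v_ref and the function itself are hypothetical, for illustration only.
    """
    levels = 2 ** bits
    count = int(voltage / v_ref * levels)
    return min(count, levels - 1)  # clamp the very top of the range

print(adc_convert(0.0))    # bottom of the range -> 0
print(adc_convert(1.65))   # mid-scale -> 512
print(adc_convert(3.3))    # full scale -> 1023
```

A real ADC does this with comparators and capacitors rather than division, but the input/output behavior is the same: a continuous voltage in, one of a fixed number of integer codes out.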
To illustrate this, let's start by considering the gray scale range shown below - this is our analog signal. It is continuously variable (remember that from above?) from some MIN value up to some MAX value.
An ADC will take that and chop it up into discrete (digital) segments. ADCs come in different resolutions - the total number of bits they use. That's why you hear people say things like "it has a 10-bit ADC". The more bits, the more segments, and the better the ADC can represent (resolve) the analog values.
A 1 bit ADC would be pretty crude. All that beautiful grayness would just become black or white.
Let's add one more bit and see what happens. Here's what a 2 bit ADC would do.
That's much better. We can start to see some grayness. But it's still pretty crude. Let's add one more bit. Here's what a 3 bit ADC would do.
And that's even better.
You can see the trend here. The more bits we use, the closer we get to the original analog signal. If we could use an infinite number of bits, we'd actually get the original signal back. But that's just not practical. No one's come up with a way to store an infinite number of 1s and 0s on a digital computer.
If we were to feed the signal we introduced at the beginning into a 3 bit ADC, it would get turned into something like what's shown below. The orange (analog) signal becomes the black (digital) signal.
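The staircase effect in that plot can be reproduced in a few lines. This sketch (the `quantize` helper is made up for illustration) snaps a smooth signal onto the 8 levels a 3 bit ADC has available:

```python
import math

def quantize(x, bits=3):
    """Snap a value in [0, 1] onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    step = int(x * levels)             # which segment the value lands in
    return min(step, levels - 1) / levels

# A smooth "analog" signal and its chunky 3 bit "digital" version:
for t in range(8):
    x = (math.sin(t) + 1) / 2          # continuous value between 0 and 1
    print(f"analog {x:.3f} -> digital {quantize(x):.3f}")
```

Notice how many different analog values collapse onto the same digital level - that lost detail is exactly the difference between the orange curve and the black staircase.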
So why not just use the biggest, most bit-est ADC there is for everything? The answer is cost. The ADC hardware itself will generally cost more, but there is also the cost of storage. Even if money didn't matter and you could buy a 900,000,000-bit ADC, do you have a place to store all those 1s and 0s? The speed of the conversion is yet another consideration - more bits take longer to resolve. Hey. Trade offs.
So you wave your engineering magic wand and pick the ADC that is best suited for your application. Or, you just work with what you've been given. In the case of the Circuit Playground, that's a 10-bit ADC.
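With 10 bits you get readings from 0 to 1023, and a common chore is turning a raw reading back into volts. Here's a sketch of that conversion - the function name and the assumption of a 3.3 volt reference are illustrative, so check your board's actual reference voltage:

```python
def reading_to_volts(raw, v_ref=3.3, bits=10):
    """Convert a raw ADC count (0..1023 for 10 bits) back into volts.

    Assumes v_ref is the ADC's full-scale reference voltage.
    """
    return raw * v_ref / (2 ** bits - 1)

print(round(reading_to_volts(0), 2))     # 0.0  - bottom of the range
print(round(reading_to_volts(1023), 2))  # 3.3  - full scale
```

Each count is worth about 3.2 millivolts here (3.3 V / 1023) - that's the finest voltage difference a 10-bit ADC with a 3.3 V reference can tell apart.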