Frequently Asked Questions: January 2026 update
Author
Chris Zombik
Date Published

I frequently ask questions. Usually Google gives me an answer, but sometimes it doesn’t. I assume this happens not because my questions are groundbreaking, but because they are poorly worded due to my limited knowledge of the subjects involved.
I’ve put a few of my questions below. If you happen to have answers to any of these please contact me!
Unanswered Questions
Q: When do you suppose they’ll stop making new Pokemon?
They’re up to 1025 already. Surely it must be getting steadily more difficult for Game Freak to design and balance each successive game—just look at how mad fans were about the exclusion of many old Pokemon from some recent games.
On the other hand, the demand for new Pokemon games, anime, products, etc. seems pretty durable. So I guess I wonder what they’re going to do as time goes on. Will they have a graceful end the way Marvel Studios stopped making Avengers movies? This seems unlikely, since unlike Robert Downey Jr. there is an endless stream of talent to come in and design the new Pokemon. The more likely situation, then, which seems unthinkable in its own way, is that they’re just going to keep making Pokemon forever until people lose interest. So are we really staring down the barrel of 100 years of Pokemon, getting up into the neighborhood of 3000, 4000… more?
Q: Is there a science to determining the size at which corporations escape regulation?
Consider how oil companies suppressed climate change research. Then consider that I, Citizen Chris, do not have the resources at hand to mislead the entire public about, say, how handsome I am, or whatever else I would wish to mislead people about in order to get what I want out of them. Clearly, somewhere between the amount of resources I possess, and those possessed by Exxon, there is an amount of resources that allows you to change the public’s perception in a way that effectively creates a new reality in which you can continue to exist despite your major flaws.
It seems to me if we could calculate this number—perhaps as a function of an entity’s size (in revenue? cash on hand?)—we could proactively avert this kind of disinformation campaign (as well as more run-of-the-mill regulatory capture) by legally limiting the size of corporations to some amount less than the number we decided on. Obviously such a function would have more variables than I’ve imagined here, but surely this kind of function could be developed, at least as a way of estimating this sort of thing.
Answered Questions
Q: Why don’t digital thermometers flicker when the temperature is stable?
Obviously in most situations the temperature at a given point in space is never actually stable for long. If I have a thermometer measuring the outdoor temperature starting at 7:00am, I am not surprised when it goes up steadily until 3 or 4 in the afternoon, then turns around and goes back down. But if I have a thermometer measuring the indoor temperature, which is relatively stable, I would expect it to occasionally get to a point where it’s sometimes 68.9 °F, then 69.0 °F, then 68.9 °F again, within a few seconds. And yet I have never seen this on the temperature readout.
I guess there must be some sort of mechanism inside the device to prevent flickering, perhaps by doing some sort of rolling average of the recent temperatures or rounding to a whole number? I don’t know how this would work. Maybe this whole question is misguided. But it really bugs me.
A: Digital thermometers don’t flicker because the display is only refreshed at a fixed rate.
This is in contrast to the sensor, which samples the temperature much more often (presumably nearly continuously) than the display is updated.
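The rolling-average-plus-rounding mechanism guessed at in the question is easy to sketch. Here’s a toy model (my own construction, not based on any particular thermometer’s firmware) that combines a rolling average with a hysteresis threshold, so a sensor jittering between 68.9 and 69.0 never makes the readout flicker:

```python
from collections import deque

class SmoothedDisplay:
    """Toy thermometer readout: the raw sensor may jitter between
    samples, but the display only changes once the rolling average
    has drifted past a hysteresis threshold."""

    def __init__(self, window=8, threshold=0.15):
        self.samples = deque(maxlen=window)  # recent raw readings
        self.shown = None                    # value currently displayed
        self.threshold = threshold           # minimum drift before updating

    def read(self, raw_temp):
        self.samples.append(raw_temp)
        avg = sum(self.samples) / len(self.samples)
        # Update the display only if the average has moved far enough.
        if self.shown is None or abs(avg - self.shown) >= self.threshold:
            self.shown = round(avg, 1)
        return self.shown

display = SmoothedDisplay()
# A jittering sensor alternating 68.9 / 69.0: the readout holds steady,
# because the rolling average never drifts more than the threshold.
readings = [display.read(t) for t in [68.9, 69.0, 68.9, 69.0, 68.9, 69.0]]
```

The window size and threshold here are made-up numbers; the point is just that two cheap tricks (averaging plus a dead zone) are enough to kill the flicker the question expected to see.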
Q: Are AIs a black box, or do their creators understand how they work?
If you’re one of the people who worked on, say, AlphaFold, do you know just how the AI does what it does? Can a human look at the solution the machine has crafted and fathom how it arrived at that solution, learn from it, and generalize it? Or is this a situation where there’s just so much statistical inference involved, machine learning that has been trained on untold amounts of data, that it’s totally beyond human comprehension—i.e., a black box?
Relatedly: Consider the three-body problem in physics. We don’t have a general mathematical solution to it. We just do “numerical methods,” which to me (not a mathematician!) sounds a lot like “advanced guess and check.” Contrast that with, say, the Pythagorean Theorem, which we know works for all triangles everywhere always. So another way to think about my question above is, when AlphaFold gives us a solution for how to fold a given protein, do we consider AlphaFold itself to be the “general solution” (in math terms), or is it just using “the power of computers” to guess and check in a clever way?
To put a final spin on this question—if the answer to the question above is “Yes, AI is a black box,” does that imply we have no hope of ever comprehending human intelligence as well?
A: Yes, and it is a problem!
I will put an explanation here as soon as I get one. I am told it is complicated!
Q: Why can humans smell things that don't exist in nature?
Like gasoline, or burning plastic.
It seems to me our sense of smell evolved, like everything else about us, in response to external pressures. So the broader question is, why do we have normal responses to things that humans ought to never have encountered in the course of our evolution?
A: We don’t smell gasoline per se, but molecules in the gasoline that bind to our olfactory receptors.
Thanks to Discord user and PhD holder in the field of Olfaction (which I now know is a thing) Nathan Gouwens for this outstanding answer:
“I don’t know the specific receptors that respond to the different components of gasoline. But there are about 400 different receptors in humans. It’s likely that gasoline causes many of them to be active (and to different extents). So some of the same receptors that might respond to the scent of urine, or grass, or a rose, might be activated by the gasoline odor, but to different extents. And gasoline might cause some receptors to be active that the rose scent doesn’t, and vice versa.
“It’s really the whole pattern of activity across all the receptors that causes a particular odor perception.
“We generally don’t have individual receptors for individual odors - we have a combinatorial system for recognizing many different potential odors - including those we haven’t encountered (like with a synthetic substance).”
Postmortem
The fact that I was struggling with this question in the first place seems like an example of an XY Problem. The question I was asking wasn’t really the right question at all. It was, fortunately, adjacent to the right question, namely—How do humans smell (gasoline, but also) anything?
This answer yields a followup question, which is less interesting to me but seems eminently answerable: What (substance/molecule/etc.) gives gasoline its particular smell?
Q: Is there a name for the drum beat in the Powerpuff Girls theme song?
I hear it elsewhere, too. Like in Root to This and the first verse of Pinball Wizard.
Whatever it is, it feels distinctly 90s to me, so much so that I assume musicians must have a name for it.
A: Yes it has a name!
It’s called the “Funky Drummer” break, and you can hear it in the James Brown song of the same name, which I am given to believe is where it originated. If you speed it up to 1.5x it sounds shockingly similar to the Powerpuff Girls theme! Thank you to Discord user Steven M. for this answer!
Postmortem
Some questions are difficult—even impossible—to Google because they involve referring to something nontextual. I do not know the proper rhetorical language to describe this distinction. But clearly there is a category of question which is simply “What is X” for a textually-defined X that Google can answer easily by giving me a textual response. For instance, “What is the capital of Florida” or “Who is the star of Titanic.” The Google-ability of the question has nothing to do with factuality, but with textuality, because Google is a textual search engine.
In contrast, I cannot Google “What is this bug” and give Google a picture I just took of the bug on my ceiling. Some apps claim to be able to do this with animals and plants, but they are unreliable. Meanwhile, Shazam and Siri can answer “What is this song,” but I am not aware of any program that can answer “What is the name of the drumbeat in this song.” Presumably all of these seemingly hard queries are being worked on by AI scientists. I don’t know. But I would love to know!