where does intelligence come from?
books for christmas
Earlier today I was in Barnes & Noble, my favorite place to be now that I’ve moved back home. There isn’t much else competing for my attention as I try to avoid social media, mostly through iOS Screen Time settings. I was there to look for Christmas presents as I’ve decided unilaterally, and somewhat tyrannically, that I only buy books for people. And only for family.
A few days ago I had spotted I Am a Cat by Natsume Sōseki or Sōseki Natsume, depending on whether you trust the book cover or Wikipedia. Author names are important as so many with a nom de plume can attest, so this strikes me as horrible marketing. It is also very confusing for those looking to share his work with other people, something I strangely have already tried to do.
As is my ritual when considering a purchase at B&N, I read about 5% of the book while standing in the aisle, clutching a mostly watery latte from the accompanying Starbucks? It’s a question because they don’t accept Starbucks gift cards, staunchly via signage, and often lack specialized drinks you’d get at a “normal” Starbucks. You also can’t collect those points people always zealot about. The B&N “Starbucks” pastries are always better though, so that’s a plus, and they accept the B&N membership for discounts, if you’re one of those people, like me. Standing with a latte reading for hours in a bookstore counts as exercise, in my opinion, because the weather in Southern California is too hot to go outside despite being December. Due to the heat and general malaise, I can’t be bothered to work out in any structured way beyond the occasional push-up to reassure myself I still exist. Something commonplace amongst most millennials. Just ask.
I Am a Cat reminds me of my dad because of Charlie and Moris. Cats I now live with. Cats that annoy me constantly, Moris more than Charlie. Cats he nevertheless cherishes roughly on par with his human children. Cats he shares in common with his new girlfriend, whom I call “Snuff,” and who has two of her own, Cleo and Oliver. Snuff sometimes also fosters kittens for the fun of it and has a room dedicated to the task. What I loved about the book was its skepticism toward humans, delivered from a cat’s perspective: humans are mostly trash until they aren’t, a sentiment I wholeheartedly agree with. I grabbed the first copy I saw, trying not to drop my latte, and moved on.
I headed to my favorite section: science. Not science fiction. Actual science. The kind with graphs and math and authors with the Dr. prefix. Most of the titles made my eyes roll because they were about AI or some permutation of AI, most of which I’d already ruled out or read, and I’m sorry, but I don’t care about Sam Altman, Elon, or any of the other shitheads monetizing what should be a common-good tool, like the internet, by building monopolistic, rent-seeking platforms designed to vacuum up other people’s thoughts, ideas, and creativity. Most of the books in science under “new technology” are basically biographies of the current top AI players rather than subject matter elucidating model peculiarities, infrastructure strategies, or meaningful changes in emergent capabilities.
Anyways, there wasn’t anything I could find in science that my dad would like except maybe a book called How to Invent Everything by Ryan North. If you haven’t heard of it, it has a strange, comic-book-style cover, which tracks given the author’s roots in cartooning for Marvel. But why it was shelved under science and not science fiction was unclear. More confusingly, the book presents itself as a literal manual for time travel and uses that framing to walk through the history of civilization. No way it can compete with Sapiens, I thought, but I kept reading all the same.
The problem with How to Invent Everything, at least from what I read and later confirmed on Goodreads, is that it’s confidently wrong about some basic things, rather like the hallucinations in GPT tools. It claims, for instance, that beer and bread were the work of animals rather than yeast (they were not), and it never addresses Arthur C. Clarke’s point that the absence of time travelers is itself evidence against time travel. Reviews echoed the same criticism: clever, funny, but not nearly as accurate as it wants to be. I put the book back, assuming my dad would be annoyed like me, and went to another section.
Sci-fi is usually safe with my dad since he still references h2g2, Hitchhiker’s Guide, almost every time sci-fi comes up. I checked out The Cabinet by Un-Su Kim (interesting premise, odd translated cadence) and Three Californias by Kim Stanley Robinson. The latter intrigued me because my dad had recently been looking for a nonfiction book about California that apparently existed everywhere except the actual shelves. Robinson’s book offered three fictional Californias instead. I read a few pages. Good. Just not him.
I circled back to I Am a Cat and read a Goodreads review that said:
“I recommend reading a bit each night, before sleep, for hilarious dreams.”
My dad likes hilarious dreams. He reads before bed. Decision made.
curiosity and skepticism
Finding a book for my mom is nearly impossible. I’ve seen her buy exactly two books in my three decades of existence, both for me. She admits to reading only one: the Bible, and listening to a few others.
I walked over to the religious section, which I have entered several times before, all accidentally. It’s disorienting. So many Bibles, motivational diaries, near-death experiences, daily Bible studies, and other religious non-artifacts. However, buried within the shelves was a comparative religion subsection that was more comprehensive than you’d expect. It had books on individual religions, comparisons between them, popes, and more. Notably, a translated Qur’an, which I learned isn’t meant to be read so much as sung, and then a book titled God, The Most Unpleasant Character in All Fiction by Dan Barker, with a foreword by Richard Dawkins, one of my favorite authors.
I read nearly 30% of it while standing awkwardly in the aisle, receiving several uninterested but curious looks. The book builds its case entirely from biblical quotations, God as villain, by His own words. It exists, I learned from the foreword, because Dawkins once wrote, in his book The God Delusion, a series of now-famous adjectives:
“The God of the Old Testament is arguably the most unpleasant character in all fiction: jealous and proud of it; a petty, unjust, unforgiving control-freak; a vindictive, bloodthirsty ethnic cleanser; a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully.”
Dawkins posits that an entire book would be required to accurately reference each adjective in the excerpt above, and that he, not being a biblical scholar, lacked the time to write it himself. He thought of Barker, a former pastor, and set him to the task. Barker obviously obliged.
Realizing I hadn’t actually read The God Delusion, I wandered back to the science section to find it, but alas, it was not there. Instead, I found something else. Crypto-Infection by Dr. Christian Perronne. I was originally drawn to it because of the prefix “Crypto-,” being someone who spent the last several years working in the blockchain industry, but then realized it was a neologism the author uses to describe a new or, in his words, specific class of infectious disease caused by the same bacteria that causes Lyme disease, Borrelia burgdorferi. The premise is that this bacteria is causing widespread, underdiagnosed chronic infections across Europe, particularly France, the UK, and Germany. I have a particular aversion to ticks, having once encountered the same tick twice in one day, two hours apart, in New Hampshire. I was convinced it was stalking me, searching for the right place to burrow its head somewhere on my body, like a terrible alien parasite from a sci-fi novel. Eventually, my grandfather plucked it off my arm and smashed it between his fingers. I hate anything that burrows into a person and requires fire to remove. Obviously.
I read several pages of Crypto-Infection, but the most interesting part of what I read had nothing to do with infections or ticks, incidentally. It was a translator’s note. The book was originally written in Dr. Perronne’s native French:
“The embrace of intelligence is proffered with two arms: that of curiosity and that of skepticism.”
I stopped to consider. Intelligence is broad, and two arms seems limiting. Sure, curiosity and skepticism, but surely there’s more to it. Not everyone is curious, and definitely not everyone is a skeptic, as we know merely from the presence of the religion section, just as we can be sure of curiosity from the science section and skepticism, famously, from philosophy.
I thought about the intelligent people I know. Which is… everyone. Every human is intelligent.
Are we all curious? No. Are we all skeptical? Also no.
There’s a huge variety of intelligent people all across the globe with varying skills, and all feel the proffered embrace of intelligence.
My mom, in particular, is neither curious nor skeptical, and she’s one of the most intelligent people I know. She’s said as much my entire life and usually gets annoyed with my questions. But so does everyone. She’s also devoutly religious. So either the translator was wrong, or intelligence isn’t so exclusionary.
After four hours in B&N, I checked out. In the car, with I Am a Cat on the passenger seat, I sat for several minutes, still considering what begets intelligence, and decided I’d call my mom to hear her thoughts, given she seemed the perfect counterargument. I called.
After some obligatory discussion about an eye appointment I didn’t want to talk about (see this post), I asked Mom directly about curiosity and skepticism, reading her the quote from the translator of Crypto-Infection. After being sure she’d heard, as you can never be sure with the spotty reception from Mint Mobile, I waited for her response.
She paused for several seconds. Then said, “Well… I’m not curious or skeptical.” Then, after a moment longer, “Maybe a little curious. But mostly I learn from observing. Watching. Listening.”
I asked a few follow-up questions about what that meant for her, and how she would compare observation to things like curiosity. At first she agreed with me that curiosity was obvious, and skepticism too, but then mentioned that sometimes she just trusts, which comes from intuition built on observation. Observation does proffer information. And what is intelligence, if not the ability to gather, organize, and apply information?
We briefly touched on faith as a conduit, too, which I explained away by simply referring to the many people who have walked off cliffs believing they could fly, the old tale grown-ups use to explain why doing drugs is bad. We settled on three super-categories: curiosity, skepticism, and attention.
Attention, not observation, since attention is mostly applied observation. We ended our call agreeing it was an interesting subject to think about, but mostly a thought experiment, and were both tired from the mental gymnastics. I couldn’t get it out of my mind, though.
Are there other categories? How does intelligence work? Or am I just being curious. Or skeptical. Or attending to myself thinking. I thought, to be continued, and drove the rest of the way home.
food for thought
As I usually do after speaking with my mom or dad or other friends about some lofty thought experiment, I broached the topic with my best friend, to hear his thoughts, but mostly to solidify my thinking on the subject, closing a mental loop.
I read the same excerpt from the translator of Crypto-Infection to him, and asked for his thoughts. He is a highly skeptical person, and equally curious, so I suspected he would have a similar perspective to me, though he might approach the problem of where intelligence is ultimately proffered differently.
I was correct in that he agreed curiosity and skepticism make sense, but he also helped me come up with a few concepts that clarified attention, and how each, when combined, build a strong case to exclude things that do not proffer intelligence. For example, one can apply a simple concept of multipliers where a super-category, take curiosity, can be multiplied by a person’s motivation for a given problem. If a person is very curious about why the sky is blue, thereby sufficiently motivated, then they are likely to build a good amount of intelligence about why the sky is blue through research and determination. On the other hand, if a person is curious about why the sky is blue, but is not motivated to find the answer, then they are unlikely to obtain the intelligence explaining the reason for the color. Put more simply, we can represent this as a math equation where S = the super-category, M = multiplier, and I = the amount of intelligence gained.
I = S * M
For example:
- if I = Curiosity (1) * Motivation (3), I = 3
- if I = Curiosity (1) * Motivation (0), I = 0
The same concept can be applied to all super-categories, like skepticism and attention. If we can quantify the level of intelligence gained by multiplying a super-category, then how can we validate a super-category? Well, if a category cannot be multiplied ad infinitum, then it cannot possibly be a super-category. For example, if we tried organization as a super-category, it becomes obvious that a person cannot organize to infinity, but they can be curious to infinity, and they can be skeptical to infinity. In the case of the sky being blue, you could learn about light, and particles, and the ocean, and any number of other subjects, multiplied by how motivated you are to solve the problem. Similarly, you could be skeptical about God, asking who created God, and would likely give yourself a headache, but you could, with sufficient motivation, go at the problem forever, reading innumerable texts. And as mentioned earlier, if you have faith to infinity, you’ll likely end up at the bottom of a cliff, in a different kind of infinity.
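The toy equation can even be sketched in code. This is purely illustrative: the function name and the numbers are mine, and nothing here pretends to be a real measure of intelligence.

```python
# Toy model of I = S * M: the "intelligence gained" on a question is a
# super-category level (curiosity, skepticism, attention) times a
# motivation multiplier. All values are made up for illustration.

def intelligence_gained(super_category: float, motivation: float) -> float:
    """I = S * M: zero motivation yields zero gain, however curious you are."""
    return super_category * motivation

# The two examples from the text:
assert intelligence_gained(1, 3) == 3  # curious and motivated
assert intelligence_gained(1, 0) == 0  # curious but unmotivated
```

The multiplier does all the work: a category only counts as "super" if, like curiosity or skepticism, it can in principle be scaled without bound.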
What then is the progenitor of a super-category? Is that even a thing? Does motivation or curiosity come first? Well, if you think about it neurologically, a person cannot be motivated without energy, that is, from food, exercise, or some other source. Figuratively speaking, you get out what you put in when it comes to the body. So energy must then be what feeds the super-categories of intelligence. So with sufficient energy, calories from food and the like, could you then be infinitely intelligent? It appears possible. Of course, within the notable constraints of the human condition: sleep, safety, and age.
To better understand this idea, the “scaling hypothesis,” currently being discussed in the AI infrastructure space, might be helpful. The hypothesis states, to put it simply, that LLMs are as intelligent as the resources they have access to. There are several types of resources the hypothesis considers important such as pre-training data, post-training feedback, and the compute available via raw processing power (currently powerful GPUs). All are analogous to human traits and abilities. For a primer on the topic, I highly recommend The Scaling Era: An Oral History of AI by Dwarkesh Patel.
If AI models have been shown to improve predictably under the scaling hypothesis, reducing loss and thereby increasing accuracy as they gain access to more resources, then maybe the same concept holds true for individual human intelligence? A so-called “human scaling hypothesis.” Patel touches on this briefly in his book, when contrasting brain size and circuitry with those of other mammals and non-mammals throughout history, and it makes sense.
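For flavor, the scaling laws reported in the AI literature typically take a power-law shape: loss falls as a resource N (data, parameters, compute) grows, with diminishing returns. A minimal sketch, where the constants are placeholders of my own rather than real fitted values:

```python
# Sketch of a power-law scaling curve: L(N) = (N_c / N) ** alpha.
# Loss decreases as the resource N grows; n_c and alpha are illustrative
# placeholders, not fitted values from any actual model.

def loss(n_resources: float, n_c: float = 8.8e13, alpha: float = 0.095) -> float:
    """Predicted loss given a resource budget n_resources."""
    return (n_c / n_resources) ** alpha

# More resources -> lower loss, but with diminishing returns:
assert loss(1e9) > loss(1e10) > loss(1e11)
```

The "human scaling hypothesis" analogy would simply swap N for whatever fuels a super-category, energy, motivation, attention, and ask whether the same diminishing-but-unbounded improvement applies.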
Could it be that one can deterministically calculate intelligence by simply generating a coefficient of superset categories and multipliers? Can we, as individuals, make ourselves more intelligent simply by finding our preferred super-category, and then fueling our multipliers with the right kind of energy?
Lofty questions, yes. Mental gymnasium activated, yes. All just food for thought, also yes…pun intended 😉
secondary thoughts “on entertainment”
My day started before Barnes & Noble.
It started with a recruiter message about a founding engineer role at a startup called camfer. This is how professional curiosity manifests for me now: a compelling title, an ambiguous mandate, and just enough information to force unpaid diligence before even deciding whether interest is warranted. Applying cold doesn’t work anymore, and referrals don’t seem to pan out like they used to, so for better or worse, the best strategy seems to be to create a fishing pole profile, anglerfish the best possible opportunities by creating content, engaging with platforms, and sharing your life’s story. Hiring and recruiting, on both sides, have become strangely asymmetrical. Recruiters initiate; candidates investigate. Recruiters narrate; candidates verify. That or you get screwed. Don’t verify and you might be in a nightmare job, get laid off, or worse, and have to move back in with your parents. Try cold applying, and you’ll end up having to research a different kind of black hole…
I hadn’t heard of camfer, though the recruiter did mention a recent Y Combinator raise and CAD plus AI, which was intriguing. I needed more information to decide if I should spend valuable time and attention learning what I needed to in order to have a decent first interview. In my research, I came across a blog post by one of the founders titled “on entertainment.” It was the very first post from the founder. A common thing I do now is to look for the first post on a blog, as I find it is an indicator of what to expect from the author(s) and their content. I read it quickly at first, then again more slowly. It wasn’t really about entertainment in the conventional sense. It was about what entertainment has become optimized for, and more importantly, how the decisions made in the name of optimization are having real consequences, especially for younger generations.
“on entertainment” articulated something I’ve felt but hadn’t fully named: modern entertainment is engineered to monopolize attention so that it can sell you things you probably want but also, probably, don’t need. All tailored to you, as a person, assembled from an aggregate of your life via browsing history and social media, in the form of an advertisement ID. Scary.
The concept of the modern entertainment economy followed me into B&N, and it framed everything that came after, especially my thoughts on intelligence, reading, and the convergence of AI tools with targeted ads.
I’ve been thinking about and trying to design more incentive-aligned systems for a while now, including the Autophage Protocol and my more recent writing on AI privacy and safety. In Autophage, I proposed a system where activity measurably benefits human health by incentivizing movement, adherence, and participation, and in AI privacy and safety, I propose a system that rewards users for their prompts and engagement.
That same logic could apply to attention, more broadly.
Right now, attention is extracted, refined, and sold. Platforms profit from it, effectively stealing energy through attention. Models improve because of it. Yet the human providing it remains uncompensated, unacknowledged, and increasingly shaped by the very systems monetizing their cognition.
This imbalance becomes more acute with AI.
Large models are trained, in part, through Reinforcement Learning from Human Feedback, including passive behavioral signals like what users accept, reject, refine, or abandon. This is labor, even if it is unrecognized as such. It improves systems. It increases their value. And yet users are neither paid nor meaningfully protected. Their prompts may be logged, their preferences inferred, their cognition quietly folded into optimization loops they do not control, and they pay for the pleasure.
In my writing on AI privacy and safety, I argued that this is both an ethical issue and, in some cases, a technical oversight. If users are participating, actively or passively, in the improvement of these systems, they should be compensated. Compensation is monetarily valuable to the individual, especially projecting into a future where AI tools are ubiquitous and attention, as a cognitive resource, is ever more commonplace, and it is an acknowledgment of agency, restoring some semblance of symmetry. In the absence of that, we humans would be in a world of hurt, given the moderation of these tools and their cognitive pull on our other capabilities, like curiosity and skepticism. Deepfakes, fallacious news, personalized worlds, and agent-to-agent protocols already exist. How can the next generation hope to compete when they are already primed to feed an intelligence that keeps growing in performance and capability before they’ve even been able to get a job?
Which brings me back to intelligence.
In wandering the bookstore and talking with people I trust, I kept circling three candidates for what actually proffers intelligence: curiosity, skepticism, and attention.
Attention is the easiest to measure, which is why institutions fixate on it. Standardized testing, credentialing, performance metrics, blah, blah, blah. Attention alone is inert.
When most discretionary attention is consumed by infinite feeds, attention ceases to be a precursor to understanding and becomes a terminal state. It anesthetizes curiosity and renders skepticism socially inefficient. Why doubt when the next stimulus is already loading?
From that perspective, doom scrolling is a cultural habit that begets an epistemic failure mode.
AI compounds this risk. When answers are instantaneous, fluent, personalized, and confident, the struggle that once produced understanding is bypassed because things like curiosity become short-circuited or dangerously reinforced, while skepticism becomes optionalized and abstracted.
If attention is the currency of the digital age, then the economy we’ve built values attention above other kinds of intelligence, and it is not us who benefit from it.
what’s next?
Is there a future where attention is reciprocally incentivized, like in the dystopian Black Mirror-esque episodes where people’s only job is running on a treadmill they watch ads on, or perhaps where self-driving cars are free because the entire cabin is a glittering, excruciating, blue-light anathema?
What happens to the distribution of wealth or intelligence? Is a benevolent superintelligence powered by an infinite attention economy at the cost of other forms of intelligence more beneficial to the greater good, or is “every person for themself” still the better option if a single entity or entities control that superintelligence?
Are we creating a new kind of oligarchy posing as the companies and products we currently trust enough to use?
Only time will answer these questions. The more interesting question may not be what companies are building, but what kinds of intelligence we are still able to cultivate.
If attention is finite, then choosing where to spend it becomes a deeply personal act, and stealing it becomes possible.
A future that respects intelligence does not require perfect systems or benevolent machines. It looks smaller. Fewer feeds. Slower tools. Interfaces that reward curiosity, skepticism, attention, and other things like creativity, openness, and collaboration. Intelligence should grow because it is abstract, exercised, and shared.
Maybe intelligence has always come from the same place. From time set aside. From attention that is chosen. From the quiet decision to stand in a bookstore aisle and read five percent of a book with no system watching, in an effort to buy a personal Christmas gift that enriches, instead of a bauble served up via algorithmic slop chosen by a machine that has no relationship to the giver or receiver, other than the profit margin it obtained.
contact
Feel free to send a message; you can find my links here.
As always, 'twas nice to write for you, dear reader. Until next time.