Why We Haven't Solved Everything
a taxonomy for problems + human-complete problems + why they'll always be around
It is the prerogative of every notable intellectual to formulate their own Grand Theory of Everything.
Usually, this theory is wrong or, more often, vague enough to be unfalsifiable. But this, I’ve argued before, can be a good thing. We’d run out of space for ideas if we required all of them to be true, and correctness is not the best measure for usefulness. There’s a large market for these kinds of ideas, and BeWrong is merely one among many attempts to eschew the right for the interesting.
But I figure that Grand Theories usually start off with a bunch of smaller Big Ideas: equally ambitious, oft-wrong theories that coalesce over the decades to build the narrative that makes up one’s magnum opus. And since one can never start too early, I set out to craft one of the Big Ideas in what could become my Grand Theory of problems, culminating in the following post.
Note: If, like me, you’re fascinated by the sheer variety and complexity of problems, here’s a great thread that explores a bunch of ideas about the relationships between problems and their solutions, with links to some excellent blog posts included. But onwards!
We begin with a classic move: crafting our own categories. Impressive, absolute statements such as “I’ve decided that there are three, and only three, types of problems” are extremely effective tools of Grand Theorists everywhere because they promise to reshape reality with just a handful of conceptual principles. Those three types are the following:
1) Problems of scarcity: We don’t have enough of a physical resource to go around, which usually leads to deprivation, and violent conflict at worst. The housing shortage, the energy crisis and climate change are all examples of problems that would be solved if we just had, or had the ability to make, more of what we needed and distribute it. In the absence of such technological improvements or resource discovery…well, we used to die. A lot. Nowadays, we’re well off enough that we can usually make do with something like redistribution or welfare. People still die though.
2) Problems of co-ordination: You can clump these problems under the banner of Moloch, i.e. undesirable equilibria that arise in the absence of trust. They’re usually solvable in principle, because they have a clear goal and involve having everyone follow a fairly simple rule (drive on the left side, pay your taxes, etc.). The hard part is getting people to comply instead of defecting. Usually, we use these handy little tools called incentives. Or language. But sometimes these problems get big enough (crime, economies, public infrastructure maintenance) that we call in external authorities (like governments) to enforce coordination, while the libertarians cope and seethe.
3) Human-complete problems: The third class of problem, and the whole point of this essay, is of a messier sort. The kind of problem whose form shifts and transforms, enough for there to exist no stable solutions, no one simple trick or easy answer. Often, they’re tangled up with questions of meaning and purpose that only make things rougher on would-be solvers.
These should not be confused with problems that only humans can solve. While it is true that the best solutions in this space require human effort, most of the difficult HC problems remain unsolved, even by the smartest, most capable individuals. Even if computers can’t do things like writing code- oh, um, driving cars, dang it! Making art- shit! Okay, turns out they can’t fix your plumbing or do your nails yet. But my point is that none of those are any more human-complete a problem than eating is. It takes more than just physical dexterity and audio-visual processing to solve truly human-complete problems.
Of course, “human-complete” is too perfect a phrase for it to have been my own invention. It owes its origins to a RibbonFarm post of the same title, in which VGR expounds on the idea in characteristically impenetrable fashion. He uses the concept to explore the connection between being human and the ability to solve infinite game problems. These are games that do not end, because the point is to keep playing, by constantly changing the rules and winning conditions.
Making a living is an infinite game, almost by definition, but I do not consider it a human-complete problem, because it’s solvable through the relatively simple act of scarcity reduction. It might still continue to be an infinite game (give someone a million dollars and they’ll often just end up working for more) but in the end, it’s still a problem of scarcity. Picking a meaningful career, on the other hand, requires many more steps1 and is human-complete.
Obviously, none of these categories exist independently of each other. Coordination helps us solve problems of scarcity, like when people agree to trade goods instead of growing their own sustenance crops. (Don’t you just love division of labour?). And lower scarcity makes it so that we’re not living on the edge of death, which makes it so much more convenient to cooperate.
It is therefore more accurate to look at all problems as containing aspects of each of the types mentioned above. Solving homelessness would be easier if we could build really tall skyscrapers in all our cities, except…we’re already capable of building them. Huh.
Turns out that it’s the coordination part of the problem that’s hard; some people simply do not want skyscrapers messing up their backyards. And some cities still have to wait till they can do much smaller things like build housing that’s taller than 40 feet. There’s always bits of each category involved.
But solving an HC (human-complete) problem is weirder, because there are intangibles at play that vary between each case. And even solving both the scarcity and coordination aspects only gets you part of the way towards a complete solution. Heck, one definition of an HC problem could be “unsolvable through regular means”.
However, the definition I prefer to use goes something like this: Human-complete problems are the ones where the optimal solution requires them to be re-solved by each individual.
I’ve played around with the definition enough to know that this one doesn’t capture all the required detail, but it will have to do for now.
How do you spot them in the wild?
I like to joke that the easiest way to tell if a problem is human-complete is when “do better” is the most popular proposed solution.
And, well…it isn’t wrong! Ideally, everyone involved would decide to up their game and we would be rid of these problems pretty quickly. But the real reason for this (often desperate) appeal is the fact that we’ve gone too long without an easy solution, and people got tired of waiting.
So one way of identifying HC problems is by checking if they’re Lindy problems, i.e. have they been around a fairly long time? Have people settled for sub-optimal, case-by-case fixes instead of generalisable ones? Do you see a general dissatisfaction with the current solutions (like with school), and outliers that absolutely hate them (also like with school)?
HC problems might sometimes look like scarcity problems (“we don’t have enough teachers”) or coordination issues (“bad management is due to poor coordination”), but they all have irreducible human components at their core. You can’t just mass-produce good teachers, let alone effective managers. More knowledge helps, but is not sufficient. Knowing what works/has worked is useful, but not enough. This stuff is hard.
When you see a problem that has multiple competing solutions, you might make the mistake of assuming that the problem is therefore a solved one. Instead, it’s more often the case that they all suck. Because behind those different approaches lies the fact that HC problems don’t have a fixed, single-best solution.
Examples abound, whether they be dating apps, note-taking tools, management courses, or psychological theories. Each of the solutions in the space tries to solve what they think the real problem is, whether that’s “human-friendly algorithms” or “representations of networked thought”. Alas, tools in these domains are only as powerful as their users. Eventually, the users have to face down the hard parts of the problems themselves.
The sheer variety of places where they show up could fill an entire book.
I managed to limit this list to half-a-dozen examples, which should be enough to get a feel for the shape of an HC problem. But you could probably round out the dozen with a few minutes of thinking.
Education: We put all the world’s books online, and we got 5% MOOC completion rates. We built universities, but they are rarely places of learning. This absence of positive learning outcomes was the main argument Erik Hoel used to make his case for aristocratic tutoring. While people have made points both for and against his thesis, I believe he gets one thing right: education is a difficult HC problem. By asking “what is missing from the world of modern education?”, he rightly observes that it goes beyond mere information.
And even if I disagree with most “decline of genius” arguments, my belief is that aristocratic tutoring, or whatever term you use to describe the idea of a 1-to-1 learning process, works. Simply because it is a human solution for an HC problem. Tutors now matter more for how they make their students feel than for how much they can teach them.
To emphasise the human factor at play here even more strongly, I venture to suggest that Hoel, Scott, and even Bloom with his two-sigma phenomenon, are quite possibly mistaking correlation for causation when it comes to achieving impressive learning outcomes. The fact that geniuses had the advantage of tutoring may be incidental to the fact that they were embedded in communities of intellectual achievement, with families and/or friends that cared deeply about knowledge and thinking. And that might be replicable through other, less direct, means.
There’s an example from the world of chess: the vast majority of grandmasters began their careers while they were children. So much so that it’s accepted wisdom that it’s hopeless to begin after crossing into adulthood. Without diving too far into my own theory of the tenuous link between neuro-plasticity and age, I suggest that the possibility of early advantage remains incredibly underrated.
Childhood is the only time that most people get to spend unreasonable amounts of time on things that aren’t directly related to staying alive, and starting out in a culture that rewards success in a particular domain is probably the most powerful competitive advantage out there. Whether you call them genius clusters or scenius is beside the point. We need scenes that are built for humans if we’re going to produce excellence, intellectual or otherwise.
The real solution isn’t about having a personalised, perfectly taught curriculum enabled through democratised AI tech, or a programme like Head Start. Those are just a further reduction in the information scarcity problem. AI is not inspiring2, extra classes are not community. Humans need both those things, from the people they live among.
Healthcare: We already know quite a lot about the human body. Not even close to everything we would like to, but enough to know what usually works to keep people healthy. In the case of accident, we can patch you up pretty well too. And most of it is ridiculously low-hanging fruit. Exercise decreases all-cause mortality, air pollution is really bad, supplement your deficiencies, get your grandparents to lift, you know the drill.
In the exact same universe, we also have the obesity crisis, hormonal disruptions and dropping fertility rates, as well as healthcare systems that are more graveyard than solution. Why? Because the existing system-level solutions can only be a bandaid at best. For the most part, health is personal. Apps are great, but they won’t do the fixing for us.
And you can’t mandate the required actions. It’s a free country, and people have the constitutional right to run their bodies into the ground. The most effective thing you’d be allowed to do is fix the air; people are in charge of the rest. P.S: This is a good idea even if you only care about national productivity.
Testing and matchmaking: I talked about this back in my post on inverted U-shaped curves and testing, but the upshot of it is that devising and implementing accurate tests is…pretty hard. The best tests are more expensive than their less accurate but more automate-able counterparts, because they require people with experience to be part of the entire process. A classic sign of a human-complete domain. Little wonder, then, that the matchmaking industry (~$7B) and the hiring market (~$27B) are huge, and growing rapidly. Oh, and those were the numbers for *just* the online versions of those markets.
The solution once again, requires everyone involved to get a lot better at doing things themselves. In this case, they need to improve at signalling, honing both the ability to broadcast accurate and honest signals and the skill of reading them well. Taking the time to know what people are looking for, and being able to make good choices are all human-level improvements that could get us out of the mess that is the hiring/dating/literally anything market.
And if that wasn’t hard enough, signals (like all strategies) fade with time. And making it easier for markets to clear requires a constant application of cross-party effort. The competitive relationship between genuine signallers and deceptive ones is as messy as the adversarial one between signallers and their intended recipients. Once again, no easy fixes.
Community: Infrastructure is important, but if that were all it took, Facebook would have been the communal utopia we’ve dreamed of. And well, I don’t think we even got close. Keeping a community functioning is equivalent to a full-time job; no really, “community manager” is now a (well-paid) position at most internet marketing departments.
Everyone wants a community, ain’t nobody want to cook meals for their friends.
Organisational sclerosis: There are many failure modes for large organisations, all leading to institutions that operate more and more inefficiently, and at lower rates of innovation. The most common cause is the onset of the bureaucratic system, whether in structure or spirit.
Bureaucracy is a coordination solution applied to the HC problem of mission dilution. Once a firm crosses from “friends in a garage” to “hiring thousands of paid employees”, the average levels of trust and buy-in to the group mission inevitably drop, and you’ve got to work with things like fixed accountability structures and incentives.
It remains an imperfect solution because it works by abstracting away the individual. This simplifies the process of hiring and management (the human solutions), but it comes at the cost of dynamism.
Discovery: No, your NLP tool doesn’t really work for knowledge creation. Real discovery involves active involvement on the edges of fast-moving scenes, lest you turn into graph garbage.
“You cannot access this information by using the Internet in read-only mode. You have to be in read/write mode and making connections to people, not just information.” - Venkatesh Rao
Being far from the action means that you receive information in polished, pre-packaged chunks, with all the alpha drained out of it. Here, “action” refers to (you guessed it) the people who are also part of these networks. Algorithms for discovery are deterministic by definition. They can be gamed, subverted or terribly designed. And they leave no place for true serendipity. Real discovery is a recommendation from a friend (or stranger) that happens to match your oh-so-illegible, personal requirements.
These examples are by no means an exhaustive list, but I hope they were representative of the class as a whole. The common threads that run through them include a variety of different desired outcomes, complexity and subjective tradeoffs.
And that’s what makes solving them so difficult.
Not only do you have to find an optimum for every person involved, but you’ve also got to solve each part of the problem from scratch. The annoying thing about human-complete problems is that there is no standing on the shoulders of giants; you’ve got to do the climbing yourself.
But “inefficient” would be the wrong way to describe them. It would be like calling a skyscraper tall: height is the whole point of its existence! The only reason these things are still problems is that they’re so perniciously tricky to fix with our regular approaches.
Of course, that doesn’t stop people from trying. And there are entire classes of failure modes that occur when approaching these issues from a naively optimistic angle. Of these, tech brain is probably the most popular one, thanks to the not-so-subtle bias in media coverage of the industry. But it isn’t an unfounded accusation; you can see traces of reductionism in places like ed-tech, “tools for thought”, and other attempts to throw a software solution at anything that moves.
Womb tanks were ridiculed because they seem to imply that the only thing responsible for plummeting TFRs is the pain of pregnancy. I mean, sure, we could breed humans to fill up the slots in our economic machine, but last time I checked, Brave New World was still stocked in the dystopian fiction section.
We’ve also got “spreadsheet brain” for naive self-optimisers, and “powerpoint monkey” as a knock on management consultants with a predilection for silver-bullet solutions derived from near-arbitrary case studies. Dating has the PUA red-pill and astrological compatibility, both on opposite ends of the spectrum of “inadequate solutions” that remain popular in absence of easier options.
Lately, it’s been the crypto scene that’s promised solutions to some of these issues. You see it in the visions for a DAO as a corporate panacea. Alas, we still work with people. You can design your org structure any way you’d like, and DAOs do make certain functions easier. But operating at the abstraction of “organization” instead of people is a recipe for failure.
There’s also a certain kind of epistemic trap I like to call the “game theory fallacy” that overstates the impact of incentives, while underestimating the sheer power of irrationally bad/good actors. While incentives are often the ideal solution to coordination problems, they fall short in the case of HC problems like, say, “saving science” or “a madman is in control”. Mutually assured destruction worked pretty well as a deterrent, but all it takes is one slightly deranged dictator to wake up in a bloodthirsty mood and oops, we get to see what nuclear winter looks like.
It certainly doesn’t help that the strongest incentives are overcome by the worst actors. It’s hard for payoff matrices to account for being irrationally evil, foolishly optimistic, or just plain dumb.
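The game theory fallacy above can be sketched in a few lines. This is a toy illustration of my own, not from any real deterrence model: the payoff numbers and the “bloodlust” parameter are invented. The point is that a deterrent encoded in the payoff matrix is decisive for a payoff-maximiser but invisible to an actor whose effective utility ignores it.

```python
# Toy deterrence game (illustrative numbers, not a real model).
# Striking first triggers guaranteed retaliation, modelled as a
# huge negative payoff, so a payoff-maximiser always holds.
PAYOFF = {"hold": 0, "strike": -1000}

def rational(payoff):
    # Standard best response: pick the action with the highest payoff.
    return max(payoff, key=payoff.get)

def deranged(payoff, bloodlust=2000):
    # An irrational actor's *effective* utility adds a private bonus
    # to "strike" that the designers of the deterrent never modelled.
    adjusted = {a: v + (bloodlust if a == "strike" else 0)
                for a, v in payoff.items()}
    return max(adjusted, key=adjusted.get)

print(rational(PAYOFF))  # hold: deterrence works on payoff-maximisers
print(deranged(PAYOFF))  # strike: the matrix can't save you
```

The incentive designer only controls `PAYOFF`; the `bloodlust` term lives entirely inside the actor, which is exactly why no adjustment to the matrix fixes the problem.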
Everywhere human-complete problems rear their heads is a place where incentives won’t solve everything. Nudges do make a difference, but only as much as the people being nudged care.
This, I think, is the crux of the entire affair.
Human-complete problems are matters of human agency, and solutions fail when they try to get around this unavoidable reality.
As with most complex systems, it’s impossible to know in advance what a global solution looks like. Instead, it’s individuals that build them from the bottom up, in their own particular ways.
“Instead, I believe that the most promising way to achieve large-scale improvement in the way basic scientific research is organized is to start small, help individual scientists, and to take small steps towards a much better world.” - Alexey Guzey
Since the failures of central planning, capitalism is accepted to be the best economic coordinative mechanism we have, but it would be a lie to claim that it has no discontents. Definite optimism as human capital sees the problem of growth for what it really is, a function of irreducible human factors like, y’know, thinking the future can be worth living in.
In Form is Fake, I lambasted the idea of perfect form, arguing that it was a concept so abstract as to be virtually meaningless in practice. And it makes for the perfect example of a top-down, universal bandaid that ends up being consistently inferior to a personal solution. The blackpill that comes with spotting a human-complete problem is that you become dismissive by default of any attempt at universal solutions.
Working with better people is the only real way to avoid getting Goodhart-ed. Trying to change people’s incentives effectively is far harder than just getting them to like you.
A particularly apt term comes from Rohit over at Strangeloop Canon, that of the Idle Kantian. He uses it to describe an ethical stance that goes like this: “if everyone did as I think, it would be moral.” For those who’ve solved their personal versions of human-complete problems, the temptation to generalise the solution is strong. And doing so involves a near-total rejection of the idea that problems can be human-complete.
For those with enough experience, the solutions seem simple enough to be summarised as personal heuristics, but that rarely scales beyond one’s personal context. This is why most advice in these spaces (dating, education, health) is so often so bad. It tries to impart heuristic knowledge without the work done to earn it, with less-than-impressive outcomes.
But perhaps the worst thing we can do, is to double down as Idle Kantians, and claim that a sub-optimal solution (schools, job boards, management school) is the best we can do. In absence of a global solution, what we need is as much variety as we can sustain.
There does seem to be a general distaste for trying to solve problems of meaning with naive abundance (Nozick’s experience machine) or crude incentives (pushback to the concept of homo economicus). But we need to be more suspicious of the very idea of “good enough”. The ceiling for the perfect solutions in these domains is as high as we’re willing to push.
Having to watch people go over the same messy steps that seem so obvious in retrospect is frustrating, but having the freedom to do so ensures that we can achieve outlier outcomes that go beyond mere efficiency. Individual exploration might seem illegible to everyone who’s outside of its specific context, but that’s where the alpha is, especially when you’re dealing with human-complete problems.
And we’ll have to keep solving those, for as long as humans are around.