On Complex Adaptive Systems, Investing, AI, and What Can Be Modeled

About a month ago I was invited on The Network State podcast. At the time it was my 7th or 8th pod, and I’m still refining how I vocalize my written ideas. I wasn’t satisfied with some of the answers I gave on why I’m adamantly e/acc. It inspired me to better explain my framework for thinking about complex adaptive systems, threats, AI, and what can be modeled.

ON MODELS

I like models. I don’t mean ML models; I mean modeling used to predict things. Our society is better for them. In fact, everyone, in a way, likes models. You like them for what they help you avoid: uncertainty.

Humans would rather have certain bad news than uncertain news. Models try to give a lens into the future, to make it predictable. A model mollifies the future’s uncertainty. Or at least it gives you the comfort that it does.

BUT… some things cannot be modeled. We really hate that. And the mids REALLY hate being told their models are bad, or even harmful! But we still try to model them anyway.

I’m a model respecter for many things. For example: investing, but not all investing. You want to show me a DCF model for your Walmart or insurance company thesis? Great, would love to see it. Those have a history to reference, reliable income figures, predictable cash flows and growth, and can be modeled with confidence. We have a precedent we can see and inputs we can inject with high probability of predicting the future. Good model.
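To make that concrete, here’s a minimal sketch of the kind of DCF I respect, with made-up inputs for a stable, Walmart-like business (the figures and rates below are hypothetical, purely for illustration):

```python
# Toy discounted-cash-flow (DCF) model for a stable, predictable business.
# All inputs are hypothetical; the point is that each one has decades of
# precedent behind it, so the output actually means something.

def dcf_value(fcf, growth, discount, terminal_growth, years=5):
    """Present value of projected free cash flows plus a terminal value."""
    value = 0.0
    cash_flow = fcf
    for year in range(1, years + 1):
        cash_flow *= 1 + growth                      # grow FCF each year
        value += cash_flow / (1 + discount) ** year  # discount back to today
    # Gordon-growth terminal value, discounted back from the final year
    terminal = cash_flow * (1 + terminal_growth) / (discount - terminal_growth)
    value += terminal / (1 + discount) ** years
    return value

# Hypothetical inputs: $11B free cash flow, 4% growth, 8% discount rate,
# 2.5% terminal growth.
print(f"Estimated enterprise value: ${dcf_value(11e9, 0.04, 0.08, 0.025):,.0f}")
```

The output isn’t the point; the point is that every input here is anchored to a long, observable history.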

I am not interested in seeing your DCF of Cloudflare. Or Nvidia. Or Ethereum. I also would be completely uninterested in seeing your model of Amazon in 2006. I don’t care how astute and well-researched it was, because you completely missed AWS.

And it’s not your fault; no one could have predicted that a business better than Amazon would sprout out of Amazon. I bet a model of Yahoo and Google in 2003 would have given the nod to Yahoo. Technology investing is a winner-take-most, power-law-on-steroids game that incorporates the oscillations of human behavior, trends, and impulses.

In this domain, your financial model is wrong.

Growth tech investing (what crypto is btw) has to factor in these S-curves and power-law dynamics. It has to predict human adoption trends and new behaviors manifesting. You can’t model these things. I’m sorry.
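To show what I mean about guessed parameters, here’s a toy logistic adoption curve of my own (not anyone’s real forecast). Two sets of guesses that are indistinguishable from today’s data give wildly different answers a decade out:

```python
import math

# Toy logistic (S-curve) adoption model. The ceiling K, steepness r, and
# inflection year t_mid are exactly the kind of inputs a growth-tech model
# has to guess.
def adoption(t, K, r, t_mid):
    """Users at year t for a logistic curve with ceiling K, steepness r,
    and inflection point t_mid."""
    return K / (1 + math.exp(-r * (t - t_mid)))

# Two forecasts that look nearly identical today (about 1M users at year 0)
# but diverge by hundreds of millions of users by year 10, purely because
# of the guessed ceiling and steepness.
for K, r, t_mid in [(100e6, 0.9, 5.0), (500e6, 0.8, 7.8)]:
    print(f"K={K/1e6:.0f}M, r={r}, t_mid={t_mid}: "
          f"today={adoption(0, K, r, t_mid)/1e6:.1f}M users, "
          f"year 10={adoption(10, K, r, t_mid)/1e6:.1f}M users")
```

Same data today, completely different futures, and nothing in the present tells you which set of guesses is right.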

Know your limits. Humans are not physically or technologically capable of modeling complex adaptive systems. I am a disrespecter of complex-adaptive-system modeling. In fact, I find it dangerous at times.

Complex adaptive systems have numerous variables you must inject and extrapolate to try to predict the future. These extrapolations, by and large, are complete guesses.

They are mostly educated, thoughtful guesses. I’m not smearing humanity’s attempt to learn and navigate our world better. But again, some humility is needed. These are guesses.

And it’s not “what’s Walmart’s revenue modeled 5 years out” guesses. That’s reasonable to forecast. These aren’t actuarial calculations that model a predictable statistical output with remarkable accuracy. Those are not guesses, those are calculations. They are calculations because they are repeatable, reproducible, and predict the future well. When you can do this, you can model it.

You cannot model complex adaptive systems (CAS).

Complex adaptive systems have like 20 different inputs (that you know of) and 4 different Chesterton’s Fences in them that will produce about 14 different 2nd-order effects, 21 different 3rd-order effects, and 11 different 4th-order effects you didn’t even know could happen.
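Just as a back-of-the-envelope illustration of how fast that space blows up (toy combinatorics on my part; the 14/21/11 counts above are shorthand, not outputs of this):

```python
from math import comb

# Back-of-the-envelope: with 20 known inputs, count how many k-way
# interaction combinations exist at each "order" of effect. This says
# nothing about which interactions matter -- only how many there are
# to even consider.
inputs = 20
for order in range(2, 5):
    print(f"{order}-way interactions among {inputs} inputs: {comb(inputs, order):,}")
```

That’s 190 pairwise interactions, 1,140 three-way, and 4,845 four-way, before you’ve even found the inputs you don’t know about.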

Models for CAS that I disrespect: the economy, the climate, and any ecosystem with a massive number of independent and dependent inputs and outputs. Before 2020 I wouldn’t have bucketed epidemiology in here, but it’s a worthy new addition to the “you really don’t know what you’re talking about, do you?” team.

You may be able to tell I’m a proponent of a Talebian approach to these systems. You do not get to put these things under your thumb, and your attempts to control them only embed fragility. You will try to stifle volatility with your modeling, because all humans innately hate uncertainty. But you don’t stop volatility, you just transmute it.

ON THE AI DOOMER RATIONALE

You may be able to see where my stance on AI doomers stems from. I view their position as a particularly upsetting mix of hubris and a misunderstood threat model.

This is functionally what the doomer decelerationist thesis is predicated on:

Every AI-misalignment precept/extrapolation is based on a series of guesses with, say, a 5% accuracy rate (and I think I’m being generous with that figure).

Now imagine how that accuracy compounds 8 guesses deep. Input 0.05^8 into your calculator. It can’t even fit on the screen.
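Spelled out (taking the 5%-per-guess figure at face value, and assuming, generously, that the guesses are independent):

```python
# Compounding the guesses: if each step in the chain has a 5% chance of
# being right, and the steps are treated as independent, the chance that
# all 8 hold at once is 0.05 ** 8.
p_per_guess = 0.05
steps = 8
p_all = p_per_guess ** steps
print(f"{p_all:.2e}")    # 3.91e-11
print(f"{p_all:.16f}")   # 0.0000000000390625 -- the string of zeros your calculator chokes on
```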

I know that’s a pretty autistic way to assess it, but it doesn’t change the reality of the incredibly microscopic feasibility of the assumptions you’re making. The probability of anything close to your AGI storm trooper conclusion is infinitesimally small.

I recognize my frame of analysis has shortcomings (a truism of any analysis). But it's probably right unless we’re dealing with a “this time it’s different” scenario. And, well, yeah.

So when I hear your 7-extrapolations-deep sci-fi paperclip maxxxing situation, I’m sorry, but you’re giving me an Amazon DCF calculation in 2006. Oh, and your model also thinks Amazon may become sentient and sterilize small children. I’m sorry, I don’t read science fiction for either analysis or pleasure.

WHAT’S THE MORE LIKELY THREAT?

What’s the track record of governments and abuse, terror, or just general unbridled incompetence? Has that ever happened before? Do we have any precedent we can point to? ¯\_(ツ)_/¯

The horde of lawyers and power-motivated bureaucrats, that’s who you want mitigating this Star Trek threat of yours?

You want to cede control of the most powerful, most impactful technology of our time to them, to protect against your .000000045% sci-fi scenario? Unequivocally gtfo.

No. I believe you have a poor understanding of risk. AI has a MUCH higher probability of being lethal if left in the hands of the middling and psychopathic personality types that disproportionately occupy these positions of power.

AI has a far greater likelihood of being dangerous in a government’s control than it does in the hands of genuinely brilliant engineers trying to create a transcendent technology for humanity. e/acc.

So I’m uninterested in any AI eschatological model for the end times. I know most mean well, but I don’t find them to be worth the paper they’re printed on. And this doom-modeling will be a breeding ground for those aiming for theocratic capture; we cannot let that happen to AI.

Your fearful extrapolations with microscopic likelihood will unequivocally be exploited by those who seek to control.

They will use your well-meaning predictions to justify an “emergency” in which it so happens this scary technology is safest in their hands (gasp).

Decels who want government control of AI are more dangerous than anything e/acc gives you.

Centralized political abuse of AI is a dramatically more dire threat than your paperclip storm trooper.

Recognize the patterns. Reject the real threat.

Embrace e/acc.

Follow @BackTheBunny
