translate the below to be more understandable:
Thinking about the phase shift in capital allocation in a world of producible, reusable intelligence.
It’s very odd
In general, finance operates on ROIC (return on invested capital)
In general, capitalism operates on margins (output sales - input costs)
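For concreteness, here is a minimal Python sketch of those two metrics. The firm and its numbers are hypothetical, and tax is ignored to keep the relationship between margin and ROIC simple:

def roic(operating_profit: float, invested_capital: float) -> float:
    # Return on invested capital: profit generated per dollar of capital deployed
    return operating_profit / invested_capital

def margin(output_sales: float, input_costs: float) -> float:
    # Margin: (output sales - input costs) as a fraction of sales
    return (output_sales - input_costs) / output_sales

# Hypothetical firm: $100M of sales, $80M of input costs, $50M of invested capital
print(margin(100e6, 80e6))  # 0.20 -> a 20% margin
print(roic(20e6, 50e6))     # 0.40 -> 40% ROIC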
Both ROIC and margins rely on fixed intelligence applied to various optimization functions, delegated by something like a prefrontal cortex (a CEO, a committee, etc.)
This accounts for time, meaning each “intelligence” is allocating m.e.a.t (money. energy. attention. time) in such a way as to receive more future m.e.a.t inflows (which relies on the game theory of others’ choices). Past and present choices dictate the flow of future m.e.a.t
If you can take $10B of MSFT capital
And instead input that into producible, reusable, scalable intelligence
It’s not clear how to think about ROIC anymore
Even a marginal rate of recursive improvement, sustained over some length of time, will outperform, because capital (and m.e.a.t) inflows will likely jump up by orders of magnitude in quick succession
(which is roughly what we’re seeing with OpenAI and AI in general)
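A rough numeric sketch of that compounding claim, in Python. The rates and horizon here are assumptions chosen purely for illustration, not figures from the note: one allocator earns a fixed return each period, while the other’s return rate itself improves slightly every period.

def fixed_vs_recursive(periods: int = 20) -> None:
    # Illustrative rates only; chosen to show the shape of the compounding,
    # not to model any actual company or AI system.
    fixed_capital = 1.0
    recursive_capital = 1.0
    fixed_rate = 0.10        # static 10% return per period
    recursive_rate = 0.10    # starts identical...
    for _ in range(periods):
        fixed_capital *= 1 + fixed_rate
        recursive_capital *= 1 + recursive_rate
        recursive_rate *= 1.15   # ...but the rate itself improves 15% each period
    print(f"fixed: {fixed_capital:.1f}x, recursive: {recursive_capital:.0f}x")

fixed_vs_recursive()  # roughly "fixed: 6.7x, recursive: ~2000x" after 20 periods

Even a small, sustained improvement in the rate of improvement eventually dominates; where the curves cross depends entirely on the assumed rates.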
The landscape/game of capitalism assumes players of roughly comparable intelligence
(you don’t have a chess grand master play against a baby)
If you can take $9B of that $10B and put it into intelligence creation
(compute costs)
Should that $9B be $100B? $1T?
Probably humanity will look at that and say: well, we don’t really know what the ROIC is… but it probably makes sense to deploy as much as possible until we glean a sense of the ROIC
“fixed” here meaning roughly the rate at which humans expand their intelligence
Author note: I don’t necessarily agree with this translation, since I posit it more as a question
while the translator has incentives to maximize m.e.a.t inflows into AI capital allocation
Here’s an attempt to explain those points in a more understandable way:
Traditionally, in business, finance, and economics, there are assumed to be diminishing returns on invested money and effort: adding more capital or work yields less and less gain over time.
But artificial intelligence upends these assumptions, because AI abilities compound and scale in exponential ways with more data and compute power.
Put simply, if you take a huge amount of money that would normally be invested across many regular companies and instead focus it all on developing advanced AI, the returns may far surpass spreading it around.
That’s because each increment of data or compute doesn’t just add a linear boost to AI performance. Instead, it unlocks nonlinear cognitive leaps due to the nature of recursive self-improvement.
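As a toy illustration of that difference in Python, under assumed functional forms that are illustrative only and not claims about how real AI systems scale: with diminishing returns each extra unit of compute adds a shrinking boost, while with compounding each extra unit multiplies what is already there.

import math

def capability_diminishing(compute_units: int) -> float:
    # Toy assumption: capability grows like the square root of compute
    return math.sqrt(compute_units)

def capability_compounding(compute_units: int, factor: float = 1.1) -> float:
    # Toy assumption: each unit of compute multiplies capability by a fixed factor,
    # so gains from earlier units feed into the value of later ones
    return factor ** compute_units

for units in (10, 100, 200):
    print(units, capability_diminishing(units), capability_compounding(units))
# At 10 units the two curves are comparable; by 200 units the compounding curve
# is astronomically ahead (1.1**200 is ~1.9e8, versus sqrt(200) ~ 14).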
So the metrics used to allocate capital in a world of human-level intelligence, like return on investment and profit margins, don’t readily apply.
If you have the option to concentrate huge capital into recursively-improving AI systems, it likely makes sense to invest aggressively, even if the returns are highly uncertain.
That uncertainty comes from this being unprecedented territory - it’s like having grandmasters compete against novices. The normal rules don’t guide you anymore.
But the potential capability gains from fueling such exponential self-improvement are likely massive in scale.
Therefore, despite the uncertainty, enormous investment may be justified to push the frontiers of artificial intelligence as far as possible.
In summary, traditional business logic breaks down when general AI becomes possible, due to its unlimited recursive potential. This requires rethinking how capital is allocated to maximize unbounded cognitive gains instead of marginal profit improvements.