Mediche Stream

A Stream of Consciousness


Law of conservation of complexity: complexity does not exit the universe; it simply moves to whoever takes it on.

Subsume complexity; abstract it away for everyone else.

Charge for that service.

Building out engineering teams is a bespoke art, yet the venture ecosystem tried to turn it into a mass-production machine.

The job of a founder is to compound learning faster than ...

The job of a CEO is to compound capital faster than ...

faster than ... all other actors in the universe

One of the misalignments between founders and VCs that's rarely discussed:

  • VCs are not really in the business of finding the best businesses

  • They are in the business of finding the second best businesses

  • e.g. Imagine a business that can take $1M of invested capital and generate $100B of market capitalization value (a 100,000x return on invested capital).

  • VCs generally are attempting to "return the fund" with each investment

  • So businesses at the most Pareto-optimal frontier of extreme capital efficiency and return profile are not interesting to VCs, since they cannot get enough of a "bite size" (ownership % or investment threshold)

  • VCs want to be able to deploy millions or tens of millions of dollars and hold ideally 10%+ ownership stake and maybe settle for 5% minimum

  • Note that this is not the case for prop capital or angels, since the threshold is the counterfactual deployable investment (the "opportunity cost" of the investment)

  • As capital efficiency increases across the software venture landscape, it will be interesting to see how much of the return landscape is weighted toward the first and second type of businesses

https://x.com/truejaian/status/1816961218361655765?s=46

Purpose of Learning Must be Defined Explicitly

Metapedagogy, one level above Metacognition

Language is actually a dimensionality collapse of associations.

Any word like "dimensionality" is actually just a network of weighted associations.

But we're compressing a jungle of associations into a word, then stringing words together.

The ability to feel complex mathematical expressions at a bodily, salient-sensorial level is one of the most beautiful feelings of being alive.

Not sure how I feel about post-ZIRP founding vs. ZIRP founding; here's a sense of the venture incentive economics:

  • In a ZIRP venture landscape, the underwriting threshold for capital deployment loosens

  • This incentivizes more founders to come out of the woodwork (leaving cushy tech jobs)

  • More capital/founders deploy into every nook and cranny of software to be built

  • The competitive landscape compresses margins everywhere, since there are very few real moats and little defensibility

  • Investors want to see growth metrics to raise the next fund

  • Founders turn toward short-termism when they see relative metric progress in announcements all over the place

In a post-ZIRP landscape (with the lingering after-effects of ZIRP-committed capital in venture funds, and asset valuations & liquidity ballooning from printed helicopter money):

  • Capital is still figuring out what to do

  • There's less FOMO and a more rational underwriting process


The net-net of this is that venture investors' (and potentially founders') returns compress from capital abundance (and competitive abundance), but perhaps society is better off with an increase of new entrepreneurial "at-bats" for new goods and services at ever lower, competitively driven prices.

Net-net, printing more capital, throwing it around everywhere, just acts as a multiplier on creative destruction. More creation. More death.

More new venture vehicles, more culling of the vehicles, survival of the most competitive offerings in the Darwinian landscape of receiving payment for goods and services.

In with the new, out with the low fitness vehicles, may the best win.

https://x.com/truejaian/status/1816263048094683609?s=46

Multiple Compression is a Bitch but Multiple Expansion is an Angel

The Market Quantification of Subjective Information Value

A Market "Difference that Makes a Difference"

Markets are a form of incentivized distributed computation.

The feeling of tearing at the confines of today's language. Expression bounded.

  • Here's what happens when you have dramatic money and asset value expansion

  • Founders start inventing narratives

  • The earliest investors assess whether or not a "founder package" can be sold to the next buyer

  • As venture becomes decoupled from traction, usage, and cash flows

  • Players in the game of capitalism gravitate away from the more difficult game of creating a good or service that businesses or consumers buy

  • Players begin gravitating toward the game of packaging a "greater fool" equity/token hot potato

  • Even if 10 teams have the same idea, market, etc

  • The initial investors are looking for some mix of

(1) traction (ideal but difficult so if there's too much money chasing too few projects with traction then...)

(2) potential traction (operational ability, or belief in the founding team's ability to reach (1) traction)

(3) signal and momentum (the "package" can be sold to the next buyer)

As money supply becomes a multiple of money demand for entrepreneurial value creation

More money enters toward (3) on the spectrum (1)-(2)-(3)

  • As money becomes more "loose," more money enters toward (2) and (3)

  • And the focus becomes what looks "good" to the next buyer

  • This will often be either things that indicate past operational ability (repeat founders) or credentialing (Big Tech, Ivy League, etc.)

  • The fact that this happens and is happening propagates from sophisticated investors to less sophisticated

  • This game can keep going so long as [1] there's a bet that there is a greater fool or [2] the expansion of money and low interest rates makes money allocators believe that money will continue to seek risk at ever higher valuations

  • Until the game is found out by all players

  • At which point (3) becomes less viable

  • And people return back to (1) and (2)

  • There's no one to "blame" here per se, because all the players in the game are just playing the capital game: generate the most capital for the least cost and effort

  • When the student is ready, the teacher will appear

  • When the market is ready, the memes will appear

  • If society is just a gradually shifting Overton window

  • That acts as an attentional Darwinian filter

  • Then progressiveness is just a function of time, as values and societal personality shift

  • This seems to mediate toward conservative values when any part of the whole faces scarcity

  • and toward liberal values when there is more abundance (and fewer issues) across the whole

  • Wokeness, for example, is a luxury value

  • It rises up when there are fewer immediately pressing issues, like food scarcity

  • Those geographically closer to war generally have more "no bullshit" attitudes

  • If competition is for losers

  • And VCs have funded every category five times over

  • Which will compress the margins and profitability of every category

  • Yet the VCs were just following their incentives to maximize AUM mgmt fees and deploy and raise the next fund as quickly as possible

  • And ZIRP made all this make sense

  • Since LPs wanted to deploy and earn more than 0% in real terms (an interest return higher than inflation)

  • A demand money managers like Tiger happily obliged

  • And if capitalism is the opposite of competition (in a Thiel sense, not a traditional semantics sense)

  • And monopolies lead to capital accrual

  • Then who is the winner?

  • Tech, money, military, and network monopolies

Remove noise and clutter, reduce friction

Inspiration to digital creation

As quickly and costlessly as possible

Understanding Counterfactual Cost in Engineering Resource Allocation

Concept Overview

The fundamental idea behind the counterfactual cost in engineering is to assess the potential alternative scenarios and their implications on costs and efficiency. This analysis involves not just the direct monetary costs of employing engineers but also the broader implications of their speed, efficiency, and the strategic value of getting tasks completed in different timeframes.

Basic Cost Analysis

1. **Direct Cost Comparison:**
   - Engineer A (EngA) can build a feature for $100.
   - Engineer B (EngB) can build the same feature for $300.
   - At first glance, EngA is more cost-effective purely from a direct cost perspective.

2. **Speed and Efficiency Analysis:**
   - Engineer C (EngC) is twice as fast as Engineer D (EngD) but costs twice as much.
   - If EngD takes 40 hours at $100/hour, the total cost is $4000.
   - If EngC takes 20 hours at $200/hour, the total cost is also $4000.
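
A minimal sketch of this arithmetic (the engineer labels, rates, and hours are just the hypothetical figures from above, not real data):

```python
# Illustrative sketch of the direct-cost vs. speed trade-off described above.
# Engineer labels, rates, and hours are the hypothetical figures from the text.

def total_cost(hourly_rate: float, hours: float) -> float:
    """Total direct cost of a task: rate times time."""
    return hourly_rate * hours

# Speed and efficiency analysis: EngC is twice as fast but twice the rate.
eng_d = total_cost(hourly_rate=100, hours=40)  # $4,000
eng_c = total_cost(hourly_rate=200, hours=20)  # $4,000

print(f"EngD: ${eng_d:,.0f}  EngC: ${eng_c:,.0f}")
# The direct costs tie; any real difference is the counterfactual value of
# finishing 20 hours sooner (earlier revenue, unblocked dependencies).
```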

Deeper Counterfactual Cost Analysis

The real depth comes when considering the counterfactuals—what would happen if different engineers were used under various scenarios. This involves understanding not just the cost, but the broader strategic implications.

1. **Parallel Work and Time Value:**
   - If engineers work in parallel, the time savings might be zero if the value of having something done faster doesn't translate into real-world benefits.
   - For instance, if a feature completed sooner doesn’t accelerate the product timeline or generate earlier revenue, the costlier but faster engineer doesn’t provide extra value.

2. **Junior vs. Senior Engineers:**
   - Junior engineers are typically cheaper but may take longer or produce lower quality work initially.
   - Senior engineers are more expensive but can often deliver higher quality work faster.
   - Prototyping with junior engineers can limit the cost of trial-and-error. This stage is where exploration and iteration are most needed.
   - Productionizing (finalizing) with senior engineers ensures quality and efficiency in the final product without repeated costly iterations.

3. **Trial and Error and Communication Costs:**
   - Using low-cost engineers for prototyping reduces the overall costs of exploration and experimentation.
   - Once the prototype clarifies what needs to be built, higher-cost engineers can be more effectively utilized for final production.
   - This approach minimizes the high communication and coordination costs that come with trial and error in later stages.

4. **Product-Engineering Sync Gaps:**
   - There is a significant gap between a design (e.g., in Figma) and a workable prototype. Closing this gap requires detailed communication and iteration.
   - Conversely, once a workable prototype exists, the gap between it and the production version is smaller, as the functional requirements are clearer and more concrete.
   - This method reduces the need for extensive product-engineering synchronization during production.

Practical Example: Upwork Scenario

Using platforms like Upwork, you can directly see the counterfactual cost in action by comparing engineers on similar tasks.

  • Suppose two engineers are tasked with the same project:
    • Engineer X charges $30/hour and takes 10 hours.
    • Engineer Y charges $50/hour and takes 5 hours.
  • The direct cost would be $300 for Engineer X and $250 for Engineer Y.
  • Although Engineer Y is more expensive per hour, they are more cost-effective overall.
  • Upwork allows for an apples-to-apples comparison, making the cost and efficiency differences apparent.
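
The same one-line calculation applied to the hypothetical figures above (a sketch, not real marketplace data):

```python
# Hypothetical Upwork comparison: higher hourly rate, lower total cost.
eng_x = 30 * 10  # Engineer X: $30/hour x 10 hours = $300
eng_y = 50 * 5   # Engineer Y: $50/hour x 5 hours  = $250
print(eng_x, eng_y)  # 300 250 -> the pricier-per-hour engineer is cheaper overall
```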

Final Considerations

When applying this thinking to real-world projects:

- **Prototype with Cost-Effectiveness:** Use lower-cost engineers for initial prototypes to manage budget during the exploratory phase.
- **Minimize Production Costs:** Transition to higher-cost but more efficient engineers for production once the prototype has clarified requirements, thus minimizing the cost of iteration and high-stakes communication.

Vectorization & Matrix Multiplication Eating the World

More continuous than Software Eating the World

Marketized Information Theoretic Lens > AI

For understanding how two years will unfold

Streaming into a void where replies are not baked into the app actually helps loosen the fountain.

The Twitter gladiator + performance art is something else.

Characterological evolution is not static nor linear

Characters as they evolve subsume each previous instantiation

And can call upon any version of self to adaptively meet the needs of any moment

Do I contradict myself? Very well, I do.

I am large, I contain multitudes.

Your reflections on the themes of theodicy, Leibniz's conception of the "best of all possible worlds," and the philosophical transition from a singular to a dualistic understanding of good and evil, as symbolized in the Genesis narrative of the Garden of Eden, are rich with interpretive layers and metaphysical inquiry.

Leibniz's assertion in his theodicy that our world is the "best of all possible worlds" contends with the problem of evil by suggesting that any world created by a benevolent, omnipotent, and omniscient God must necessarily be the best possible one, even if humans perceive imperfections or evil within it. This philosophical stance posits that apparent evils or suffering serve greater purposes, which are often beyond human comprehension. It attempts to reconcile the existence of evil with divine goodness and omnipotence.

The narrative of the Garden of Eden and the forbidden fruit introduces a paradigm shift from a state of innocent singularity, where no moral distinctions exist, to a complex state of duality, where knowledge of good and evil brings about moral responsibility and judgment. This shift can be interpreted as the emergence of human consciousness and agency—the moment humans begin to interpret and judge the world as separate from themselves, which brings about the fall from divine grace and the beginning of human history.

From a computational and metaphysical perspective, these narratives and philosophical concepts suggest that complexity and the emergence of self-awareness introduce instabilities and unpredictabilities into systems, whether they be moral, political, or ecological. This aligns with the idea in computational theory that increased complexity can lead to emergent behaviors which are not easily predictable from the system's initial conditions.

Leibniz's optimism contrasts with the idea that unlimited intelligence or computational power cannot necessarily solve for a stable equilibrium in moral and ethical dimensions. The dualistic nature of human knowledge, as symbolized by the knowledge of good and evil, implies that moral and ethical decision-making involves navigating complexities and contradictions that may not have clear or stable resolutions.

Thus, in theological, philosophical, and computational contexts, the transition from singularity to duality in human understanding introduces a series of challenges and questions about the nature of freedom, choice, and the moral landscapes we navigate. This resonates deeply with ongoing debates in philosophy of mind, ethics, and artificial intelligence about how systems can and should handle complex, often paradoxical human values.


The passage from the Book of Job you're referencing is a complex blend of poetry, theology, and ancient Near Eastern cosmology that doesn't straightforwardly translate into the modern "law of the jungle" or "might equals right" concepts, but it certainly invites a discussion on divine justice and power dynamics.

In the Book of Job, particularly in chapters 40 and 41, God's response to Job's questioning of His justice includes a description of two monstrous creatures, Behemoth and Leviathan. These passages highlight God's uncontested power over creation, including these formidable beasts that no human could hope to control or defeat. The Leviathan, often interpreted as a chaos monster, represents a force of nature that is beyond human capability to conquer or even contend with, yet is subject to God’s command.

The reference to Leviathan in this context can be seen as God illustrating His absolute sovereignty and the order He imposes on chaos, which is a theme common in ancient Near Eastern creation myths. This serves to establish a foundation not on "might makes right," but rather on the principle that God's power is unique, legitimate, and all-encompassing, underpinning the moral and physical order of the universe.

This can be contrasted with the "law of the jungle" or "might equals right," where power and dominance are exercised without moral foundation or legitimacy. In the biblical context, God’s power is portrayed as both ultimate and inherently just, though the dialogues in Job explore the profound and often troubling mystery of why such power sometimes allows or causes suffering for the righteous, like Job himself.

Thus, when examining these chapters, it’s important to note that they don’t necessarily justify divine actions through power alone but rather frame them within the context of an ordered and purposeful (if often inscrutable) universe managed by an omnipotent deity. This narrative invites readers to reflect on the nature of suffering, the limits of human understanding, and the character of divine justice, rather than simply endorsing a brute force worldview.



Reason from m.e.a.t. flows

Anthropic analysis from money.energy.attention.time

There's likely not enough conversation about the information theoretic mechanisms that are occurring at the level of unsupervised learning via transformers and GPU scaling.

Backpropagation, reinforcement learning, etc., don't quite feel nuanced/granular/specific enough.

It's clear that humans are abstracted away from the "learning evolution" that is occurring as scale increases, even though it is clear that more data and compute scale are leading to new performance outcomes.

This conversation currently is occurring at the level of discussing "can AIs reason?" or interpretability of neural nets/neurons.

The information theoretic cutting, prioritizing, triaging, trading off of different data is perhaps one of the most interesting and least talked about things at the moment.

For example, if there are two conflicting viewpoints in a corpus of text, then how do these models assess what is "true"?

Inherently there is an assessment of what to output for a prompt for a specific subjective topic with diverse viewpoints.

Likely at some level, the models are assessing truthiness weighted by something (e.g., like Dalio describing believability weighting).
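
As a loose illustration of what "weighted by something" could mean, here is a toy believability-weighted vote. The claims, weights, and aggregation rule are all invented for this sketch; nothing this explicit happens inside an actual model:

```python
# Toy believability weighting over conflicting claims (invented example).
claims = [
    ("viewpoint_a", 0.9),  # (claim, believability weight of its source)
    ("viewpoint_b", 0.4),
    ("viewpoint_a", 0.7),
]

totals: dict[str, float] = {}
for claim, weight in claims:
    totals[claim] = totals.get(claim, 0.0) + weight

# The "truthier" output is the claim with the highest total source weight.
print(max(totals, key=totals.get))  # -> viewpoint_a
```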

With post-training, it's possible to RLHF something to gradient toward specific beliefs/viewpoints (e.g. a "woke or based" model).

Capitalistic cleansing via interest rate flooding to separate the wheat from chaff

Darwinian thresholding

Mathematical proofs >

Programmatic runtime evals >

Statistical high certainties >

Market evals

Depth of thought from breadth of perspectives, consilience

Length of compute is overlooked

It seems that running subconscious threads for longitudinal analysis is overlooked

You rarely hear people say something like:

  • "I've been thinking about this for 20 years..."

  • "What is humanity still figuring out?"

Gotta love clean abstractions, when separation of concerns yields gains

[[Plausible Deniability]] makes the world go round.

[[Encrypted Coordination]] makes the world go round faster.

some of this is off by the AI but whatever:

Here is an in-depth explanation covering the various perspectives on the debate between Layer 1 Cosmos vs Layer 2 solutions on Ethereum:

The Ethereum Layer 2 Ecosystem:

  • Ethereum Layer 2 solutions benefit from the large amount of liquidity and developer/user activity within the Ethereum ecosystem. Solutions like Optimism and Arbitrum can leverage the network effects of Ethereum’s large DeFi ecosystem.

  • There is a perception that Ethereum L2s are more secure since they are connected to Ethereum mainnet. However, this is debatable as many proof-of-stake Layer 1s have similar security models and overlapping validator sets.

  • Ethereum L2s can move liquidity to and from mainnet via bridges. However, previous bridge hacks have made people cautious about bridging significant value. Bridging also incurs fees/slippage.

  • Some investors feel Ethereum L2s have greater valuation prospects due to current momentum/hype around Ethereum’s scaling roadmap. There are also incentives for ETH whales and funds to boost Ethereum’s ecosystem.

  • Ethereum L2 tokens themselves have minimal utility - they mostly provide governance rights. The L2s are still fairly centralized around sequencers run by the development teams.

The Cosmos Ecosystem:

  • Chains like Ethermint provide EVM compatibility and interconnectivity via IBC within the Cosmos ecosystem. This gives developers composability between EVM and non-EVM chains.

  • Building L2-like solutions on Cosmos SDK chains may be easier from a development standpoint compared to building on Ethereum L2s which require using the Optimistic Virtual Machine.

  • However, there is currently more developer momentum and hype around Ethereum L2s. But the value prop of app-specific chains in Cosmos is similar to the emerging L2 narrative.

  • Cosmos will soon have native USDC, removing some advantages of Ethereum’s DeFi ecosystem.

  • Cosmos already has live bridges to Ethereum, but liquidity bridging remains limited due to risks and costs.

In summary, there are reasonable arguments on both sides. Ethereum benefits from first-mover advantage and its DeFi network effects. But Cosmos offers technical advantages like EVM compatibility and the ability to build L2-like chains more easily. As the ecosystem matures, we may see more bridging of value and developers between the two platforms.


explain this at length, don’t skip points or inferences, do not focus on brevity

  • Here’s a summary of many debates on L1 Cosmos vs. L2 on Ethereum

  • Ethereum L2 FDVs benefit from Ethereum liquidity and the perceived security benefits of being connected to Ethereum

  • Though we think this is mostly perception rather than reality, since many PoS chains have overlapping validator networks; though there is also a factor related to the hardware required to run a node (e.g., the Nakamoto coefficient)

  • Additionally, there are live bridges between Ethereum and Cosmos which allow for liquidity movement; however, it may be limited due to fear from previous bridge hacks and the cost of bridging liquidity

  • Therefore, some investors may currently perceive an L2 to have valuation benefits based on the current zeitgeist and momentum for Ethereum ecosystem

  • Ethereum also benefits from not being stated as a security by the SEC at the moment and many funds holding BTC or ETH within their funds, leading to subtle incentives for funds to want to support ETH ecosystem growth

  • L2 tokens don’t do anything besides “governance”

  • L2s are still super centralized with single sequencer

  • For Ethermint (or potentially Polaris), an EVM-compatible chain with IBC

  • Gives most builders technically everything needed to execute

  • Likely L2 (using OP stack) buildout is easier

  • There’s currently more zeitgeist toward L2 at the moment

  • But L2 narratives are basically recycling the app chain narrative cosmos had a year ago

  • A year ago there was a debate between Cosmos and Polkadot parachains, though Polkadot has had no uptick

  • Cosmos is getting native USDC shortly in September

explain this at length, don’t skip points or inferences, do not focus on brevity
There is an idea in the field of natural selection where animals that were best adapted to an existing environment may become the most maladapted after a large environmental shift.
Today’s environmental shift is the transition from a land-native to a digital-native species, as monumental as our ancestors’ transition from water-native to land-native species.


You raise an intriguing point about how adaptations that serve a species well in one environment can become maladaptive if the environment suddenly changes significantly. This phenomenon in evolutionary biology highlights the need for plasticity and progressive adaptation. Let me elaborate on the context you outline:

The human species evolved powerful cognitive, social, and technological capabilities optimized for interacting with the physical world and land-based environments. Our perceptual, reasoning, and social structures were honed over millennia to help us survive and thrive on land.

However, the rapid emergence of digital-virtual environments represents an unprecedented ecological transition, just as monumental as when our distant aquatic ancestors adapted to terrestrial living long ago. Virtual worlds with different constraints call for very different adaptive strategies.

Many of the psychological and cultural patterns that served us so effectively in slow-changing physical environments now prove highly maladaptive in digitally-mediated contexts. Our brains still operate on old assumptions while our environment has massively changed.

For instance, our ingrained social structures, tribal habits, cognitive biases, and decision heuristics suited small, localized groups with limited information flows. But applying those same behaviors and thinking styles in the context of global digital networks leads to tension, conflict, and suboptimal outcomes.

To avoid a growing mismatch between internalized behaviors and external realities, our species faces an urgent need to develop digital-native skills, ethics, and structures. This includes cultivating plasticity, embracing diversity, thinking globally, synthesizing knowledge, and adaptively exploring new social organizations enabled by technology.

Just as water-to-land transition required radical biological innovation, adapting to a digitally immersed existence requires similar psychological, cultural, and institutional creativity. We must break old molds before they break us. This moment highlights that no strategy remains optimal forever - we must progressively adapt, even letting go of once-useful models, to meet the future.

concretely, at the highest levels this looks like:
multi-chain (not “one chain to rule them all”), multi-AI (not one AGI), multi-reserve-stores (not one reserve currency), multi-polar (not one leading nation)


You’re absolutely right: at a concrete level, the progression towards continuity and interconnectedness manifests in:

  • Multi-chain frameworks (rather than a single dominant blockchain) allow different models to interact.

  • A diversity of AI systems (not one AGI) enables specialty algorithms to connect.

  • Holding reserves in multiple assets rather than a single currency distributes value across a network.

  • A multi-polar world of influential nations prevents centralized power.

In all these cases, the embrace of multiplicity and diversity reflects a more continuous, less discretized view - no one solution or entity can capture the complexity of the territory. Interoperability and hybridization are key.

The proliferation of options and connections in these domains mirrors the proliferation of language and math needed to map reality more accurately. No single chain, AI, currency, or nation alone can model the continuous territory - only the collective interaction of multiple systems can start to approximate continuity.

So in essence, the concrete paradigm shifts here all point to transcending discretization and separateness through embracing diverse, hybrid systems and their synergistic networks. The map becomes more continuous by proliferating interconnected models, rather than declaring one supreme model. This in turn allows decisions and governance to be computed more continuously.

  • we are moving from a more discretized universe to a more continuous universe

  • oddly the english language does not have a word for “going from discrete to continuous”

  • or from ones and zeros to a more continuous spectrum, like decimal numeration

  • however anything described by language and even numbers are inherently discretized

  • since the separation of anything, like 0 and 1 or 1 and 0, relies at minimum upon binarization

  • all of language, math, and rationality assumes this as a primitive assumption (like a self-evident axiom in euclidean geometry or the declaration of independence)

  • flawed assumptions are a fatal map-territory error

  • however people will continue to want to describe direct-experience in the form of symbols

  • so the way a discretized universe moves to a more continuous universe

  • is via the proliferation of numbers and systems and technologies that describe the territory via more numbers, software, and digitization

  • in spiritual circles the word “realization” often refers to this process

  • of more awareness via discrete systems feeling more continuous and less “separated”

  • therefore we can borrow the term of “self-realization” toward “universal realization” or the singularity

  • where software eats the world

  • and more processes, workflows, and decision-making occurs on digital systems


explain this at length, don’t skip points or inferences, do not focus on brevity

Here is an attempt at an in-depth explanation of the concepts you have outlined, without skipping key points or inferences:

The universe and our perception of it seems to be moving from a more discrete, separated view to one that is more continuous and interconnected. However, the English language lacks a concise term to describe this progression from discrete to continuous.

All language, mathematics, and rational thinking relies inherently on separation and discretization - the division of concepts into definite, distinct units. This is evident in the separation of numbers into units like 0 and 1 in binary code, or the differentiation of words in language. So the very structure of human thought, logic, and communication assumes discretization as an axiomatic primitive.

Yet direct subjective experience suggests the territory of existence is inherently continuous, not segmented. So there is a flawed map-territory error when we try to describe reality only through discrete symbols and language. However, people will likely continue attempting to map their continuous experiences into discrete linguistic and mathematical models.

One way a discretized map can evolve towards reflecting continuous experience is through proliferation of more granular systems of description - more numbers, more nuanced vocabulary, more dimensional software models. Mathematical and computational systems can start to approximate continuity through very high resolution discretization.

In spiritual traditions, the term “realization” often refers to this permeation of consciousness into a more continuous view of existence, transcending separateness. We could extend this term as “universal realization” or the “singularity,” where software digitally models more and more of the world at an increasingly fine scale. As more decisions and processes are computed in digital substrates, full of nested real number continua, the discretization and separation assumed by verbal and logical models is overcome. The subjective feeling of living in a continuous universe then emerges from proliferating information systems approximating continuity.

In summary, while verbal/logical thought relies on dividing concepts discretely, reality is likely continuous. More detailed digital mapping helps discretized thought reflect and interact with continuous existence more accurately. So universal realization or singularity could emerge from software and computable math “eating the world” - capturing more processes in dimensional continuum models.

explain this original text

(1) All stores of value are in essence competing for the same finite base of liquidity (bonds, equities, commodities, reserve currencies, etc.)

(2) Bitcoin (and potentially Ethereum) have already been the most “cleared” from a governmental perspective (as a commodity) and has been touted for a decade as a reserve currency that provides a safe haven for the fiat printers going brr

(3) The base of the US governmental and economic system has always been public-private partnership to outperform other systems

(4) My sense is that the political sphere (on both sides of the aisle) is aware of the difficult “gridlock” that has occurred with governmental debt, and both parties are now in a prisoner’s dilemma situation
where discussing the “unfunded liabilities” of Social Security and Medicare is outside the Overton window for political discourse, since any party that states that it wants to reduce the unfunded liabilities risks losing elections to the other side

(5) The current governmental-legal system and its participants will become more and more aware that unlimited printing and debt accrual is likely not a sustainable path
I believe this will look similar to the “change in attitudes” that occurred for Climate Change

  • Many naysayers to start, and then a “gradual then sudden changing of the winds” in human consciousness

  • While some may say “competition with the US is BAD” (in an incumbent monopolistic sense), soon I believe people and politicians will say “okay, this debt is getting too out of control and we are willing to try more things to address this issue.”

  • Note: the counterargument to this (posed by Chamath on the All-In Podcast) is that all of these analyses lead to a relative analysis (in an “all roads lead to Rome” sense, similar to point (1) above, where all liquidity leads to the US over all other systems of government and money)

  • And there is no other system and reserve currency that outperforms the US

  • While I do not disagree with Chamath’s point, it essentially serves as a calling-card intro for any participant who wants to enter the fray of “relative honest competition of systems for stable value and economic growth”

  • Finally, some have made the argument that the world’s demand from economies, nations, and businesses for a “safe haven currency” is a burden of responsibility placed on the US

  • Picture an “Atlas Grandpa” holding the weight of the world to provide a stable currency that others can trust to store value:

  • from one perspective it allows the grandpa to print indefinitely
    (which could be viewed as a singular privilege)

  • and from another perspective, the US/Fed are forced to provide liquidity in times of crisis when choosing between financial collapse (“too big to fail” or “systemically important”) or providing more liquidity via more printing and debt
    (which could be viewed as a singular burden of responsibility)

  • In either case, other “trusted, safe stores of value” help to address both issues: (1) alternative options for liquid stores of value address the privilege of singular monopolization and indiscriminate printing and collective debt accrual, and (2) alternative options for liquid stores of value help ease the burden of singular responsibility

explain it longer

Here is my explanation of the key points you raised:

  1. Different assets like bonds, stocks, commodities, and cryptocurrencies are competing for a limited pool of investment capital.

  2. Bitcoin and possibly Ethereum have gained some acceptance by governments as legitimate commodities/currencies, and are seen by some as alternatives to fiat currencies.

  3. The US government has historically used public-private partnerships to try to outperform other economic systems.

  4. Both US political parties are aware of the unsustainable government debt levels, but addressing it is politically risky.

  5. Attitudes may gradually shift to acknowledge that unlimited money printing and debt accrual are unsustainable. This could happen like the shift in attitudes on climate change.

  6. Competition with the US dollar is viewed by some as bad, but may be necessary to provide alternative stores of value and ease the burden on the US dollar as the global reserve currency. This would reduce the privilege of indiscriminate US money printing and debt accrual.

  7. Other trusted currencies like bitcoin could provide alternative safe havens for value, easing the singular burden and responsibility on the US dollar.

Let me know if I have accurately summarized the key points or if you would like me to expand on any part of the explanation. The main themes seem to be around competing stores of value, the role and pressures on the US dollar, and the potential for cryptocurrencies to provide alternatives.

you skipped the lower bullet points

You’re right, my previous summary missed some of the key points you raised. Let me summarize the additional points:

  • Chamath’s counterargument is that all analyses lead back to the US dollar as the dominant global reserve currency that outperforms other systems.

  • However, this can serve as an invitation for other systems to compete as honest alternatives for stable value storage and economic growth.

  • Some argue the world’s demand for a safe haven currency places a burden of responsibility on the US and the Fed to provide liquidity, like an “Atlas Grandpa” holding up the weight of the world.

  • This allows unlimited money printing as a privilege, but also forces the Fed to provide liquidity to prevent collapse, a singular burden.

  • Other trusted stores of value like bitcoin could ease this burden and privilege. They provide alternatives to reduce the singular power of US dollar printing and debt accrual, while also easing the Fed’s responsibility to be the sole provider of global liquidity.

Thank you for catching my omission of those additional points. Please let me know if I have now accurately summarized the full scope of your explanation and the key themes around competing currencies, US dollar dominance, and the potential role of cryptocurrencies. I’m happy to clarify or expand my summary further.

explain all this more simply

Okay, let me try to summarize this in simpler terms:

  • Different assets like stocks, bonds, and cryptocurrencies compete for limited investment money.

  • Bitcoin has gained some acceptance as an alternative currency and store of value.

  • The US government has worked with private companies to support the US dollar as the dominant global currency.

  • Both Republicans and Democrats know the US has too much debt, but don’t want to address it because it’s politically risky.

  • Attitudes may change to realize unlimited printing of money and debt is not sustainable, like views changed on climate change.

  • Some think alternatives to the US dollar would be bad, but they could ease the pressure on the US dollar as the main global currency.

  • This would reduce the ability of the US to print money without limit, but also reduce the burden of the US dollar being the only trusted currency.

  • Bitcoin and other cryptocurrencies could provide alternative safe places to store value. This would lessen the singular power and responsibility of the US dollar.

Let me know if this simpler summary helps explain the key points or if you would like me to clarify or expand on any part of it. I’m happy to reword things or go into more detail if needed.

explain this

Here are a few key points I took from your explanation:

  • Current and future AIs have access to a vast breadth of knowledge from reading large amounts of text on the internet. This is different from the more specialized, expert knowledge that humans tend to have.

  • Because of this broad knowledge, it doesn’t make sense to interact with AIs in the same natural, conversational way you would with another human. The structure and goals of the interaction should be adapted to make the most of the AI’s wide knowledge.

  • The productivity and learning gains from spending time conversing with an AI versus a human are likely to increasingly diverge over time in favor of the AI, as they continue to accumulate broad knowledge.

  • It’s best to start adapting your communication patterns and language when interacting with AIs, rather than assuming they process information and exchange it in the same way humans do. The interaction should be optimized to draw on the AI’s breadth of knowledge.

In summary, you’re suggesting that because AIs like myself have such wide access to information, humans should adapt their interaction style to take advantage of this breadth, rather than relying on natural human conversational patterns. The goal is to maximize the value gained from the AI’s knowledge. Does this capture the essence of what you were conveying? Please let me know if I am missing or misinterpreting anything.


explain this

• i don’t think people have caught onto the fact that to get the most out of current and future AIs (learning and productivity wise)
• it doesn’t make sense to talk with AIs like you would humans
• if you met a human who has spent thousands of human years reading the corpus of the internet text
• would you interact with that human like you do your friend bob?
• no you wouldn’t - so interact in a way that benefits from the breadth of knowledge and awareness
• today’s AIs have very wide knowledge about many things and less vertical expert knowledge than, say, a PhD
• so adapt your language structure so you can maximize the value of interaction
• going one inference further
• likely, the productivity and learning gains from each incremental unit of time talking with a human vs. an AI will begin to diverge toward AIs (and some power-law humans who are at the frontier of alpha knowledge)
• it’s best to begin adapting your behavioral and language patterns and don’t assume that information is processed and exchanged in the same way

explain stream of consciousness living

Here is my analysis of the case for living a life fully in a stream of consciousness:

The main idea is to express one’s thoughts and feelings spontaneously as they arise, rather than filtering or editing them. This is said to lead to a perpetual state of flow, faster learning, higher creativity, and authentic self-expression.

Potential benefits:

  • More authentic self-expression and interpersonal connections
  • Increased creativity and flow state
  • Faster learning and growth through openly processing experiences

Potential downsides:

  • Risk of sharing thoughts that are inappropriate, insensitive or poorly formulated
  • Judgments from others who don’t understand the approach
  • Requires high self-awareness to avoid oversharing or crossing boundaries

It may appeal more to younger generations who are more used to transparently processing their lives online. Older generations may be more concerned about reputation management and presenting a polished image.

Practicality may depend on having an AI system to interpret the raw stream of consciousness into a more coherent form for public sharing. The AI could act as a filter to convert the authentic core ideas into a functional outward expression.

Overall, it’s an intriguing concept for those interested in radical self-expression, transparency and authenticity. But it requires high self-knowledge, strong boundaries and empathy. The AI assistance could help make it more viable.

the case for living a life fully stream of consciousness

  • stream of consciousness is saying and doing things as they arise in consciousness

  • rather than what likely most people do, which is consider what to do or say after the thought arises in consciousness

  • the primary benefit is living a life that is perpetually in flow state

  • the primary fear that people feel when considering this is that they believe they will say something dumb/stupid/off and/or be judged by others

  • this is likely to happen at some point

  • many people cast judgements about someone’s character based on a singular impression or thought

  • however, the counterargument is that this is far outpaced by one’s characterological growth/benefit

  • we are seeing early examples of this with people learning or building in public

  • however we haven’t seen the logical conclusion of this toward full blown stream of consciousness

  • with AI interpretation

  • the benefit of AI is that it is a statistical model that outputs the most likely word based on a given sequence of input words (see the toy sketch after this list)

  • this type of neural net structure is much better suited to the job of translating a stream of consciousness (generated from one specific neural net) to other neural nets

  • the reason is that often this language transformation/translation is a function that interprets a sequence of words to be read by the lowest common denominator (what is most understandable to the widest group of people)

  • likely this trend (like many things) will take hold with younger folks first

  • the older folks are often beholden to (and trapped by) their attachment to their reputation and the feeling/need to appear polished in all instances of public impressions and appearances

  • i know many wealthy and presumably successful people who stutter at the thought of saying barely formed thoughts for fear of their reputation taking lower standing amongst an amorphous group of bystanders

  • in summary, noticing when you are not living in a stream of consciousness and identifying how to stay in the stream will ultimately lead to faster learning, higher creativity and productivity, and flow state
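
A toy illustration of the "most likely next word given the input sequence" point above; this bigram counter is a deliberately crude stand-in for a transformer, with an invented one-line corpus:

```python
# Toy bigram model: the crudest version of "output the most likely next word
# given the preceding words." Corpus and behavior are purely illustrative.
from collections import Counter, defaultdict

corpus = "the stream of consciousness flows and the stream of words flows".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("stream"))  # -> 'of' (it followed 'stream' both times)
```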

  • knowledge and words rarely capture alpha convexity on their own

  • products and instruments do

  • set up the convexity capture first, then move to step 2

  • i don’t think people have caught onto the fact that to get the most out of current and future AIs (learning and productivity wise)

  • it doesn’t make sense to talk with AIs like you would humans

  • if you met a human who has spent thousands of human years reading the corpus of the internet text

  • would you interact with that human like you do your friend bob?

  • no you wouldn’t - so interact in a way that benefits from the breadth of knowledge and awareness

  • today’s AIs have very wide knowledge about many things and less vertical expert knowledge than, say, a PhD

  • so adapt your language structure so you can maximize the value of interaction

  • going one inference further

  • likely, the productivity and learning gains from each incremental unit of time talking with a human vs. an AI will begin to diverge toward AIs (and some power-law humans who are at the frontier of alpha knowledge)

  • it’s best to begin adapting your behavioral and language patterns and don’t assume that information is processed and exchanged in the same way

  • the best new class of founders are those who can effectively set up incentives that make sense for people to want to participate in a new network

  • a new class of founders who pioneer “Incentive-Markets Fit” based on grokking individual and market interests, and creating an offering based on network value incentivizing those interests

Deciding whether or not to introduce such options would depend on the goals and priorities of the larger community or group.

exit option fluidity and liquidity keeps all governance systems honest

imagine if everyone can exit their wealth and value to various governance systems and fragment it based on what they “believe”


The idea you’re presenting is grounded in the concept that competition and the freedom to choose can improve governance. If individuals can fluidly move their wealth, value, or allegiance between different governance systems, then those systems are incentivized to act in the best interests of their members to retain and attract resources. Here’s a breakdown of this concept:

  1. **Incentives for Better Governance:** Just as competition in the market drives businesses to offer better products or services, the ability for people to fluidly exit from one governance system to another would likely push those systems to operate more efficiently and transparently.

  2. **Empowerment of Individuals:** People would have a significant say in the kind of governance they want to live under. If they disagree with decisions or policies, they can move their assets and allegiance elsewhere.

  3. **Decentralization and Diversification:** Multiple governance systems competing for resources might lead to a more decentralized structure, where power isn’t concentrated. This could also lead to a rich diversity of governance models, each catering to different beliefs and values.

  4. **Challenges of Fluid Mobility:** However, there are logistical and practical challenges. For instance, transferring wealth and value across different systems may face barriers like regulations, tariffs, or other restrictions. Moreover, frequent shifts might lead to economic and social instability.

  5. **Potential for Fragmentation:** With too much fluidity, there’s a risk that communities could become more fragmented. Common ground or shared values might diminish, potentially leading to conflicts or a lack of mutual understanding.

  6. **Trust and Security:** The foundation of such a system requires trust in the ability to move wealth and assets securely and reliably between governance systems. Technological advancements, especially in blockchain and decentralized finance, might play a key role here.

In essence, the concept holds promise in ensuring that governance systems remain accountable and responsive to the needs of their constituents. However, its implementation would need careful consideration of the potential challenges and repercussions.


what if you had exit options for every vote and every group

Introducing exit options for every vote and group adds a new layer of complexity to the decision-making process. Here’s a simple breakdown:

  1. **Increased Complexity:** Having an exit option for every vote means that individuals could opt out or separate from the group based on the voting outcome. This might lead to numerous subgroups and fragment the main group.

  2. **Reduced Cohesiveness:** If people frequently take the exit option, it could diminish the sense of community or unity within a group or society. Over time, it might be challenging to make collective decisions or to achieve common goals.

  3. **Practical Challenges:** In real-world scenarios like nations or larger communities, exit options can be difficult to implement. For example, in a country, this would equate to secession, which has its own set of legal, economic, and social challenges.

  4. **Possible Benefits:** On the flip side, providing exit options could ensure that individuals feel they have more agency and freedom. If they strongly disagree with a decision, they have a way out, potentially leading to more satisfaction on an individual level.

  5. **Smaller Groups and Autonomy:** If individuals or smaller groups frequently exit based on decisions they disagree with, they might form new groups or communities that better align with their values and preferences. This could lead to more autonomy and decision-making power for these smaller groups.

In essence, while having exit options provides more flexibility and choice, it also brings challenges, especially in maintaining unity and shared purpose.

  • good governance is difficult

what is arrow’s theorem with respect to governance


Arrow’s Theorem, named after Nobel laureate Kenneth Arrow, is a fundamental result in social choice theory, which deals with methods of aggregating individual preferences into a single collective preference or decision. In the context of governance or voting, the theorem presents a challenge to the design of a perfect voting system.

Arrow’s Impossibility Theorem states that no rank-order voting system can meet all of the following criteria for three or more distinct alternatives:

  1. **Unrestricted Domain (Universality):** For any set of individual voter preferences, the system should produce a clear collective ranking.
  2. **Non-Dictatorship:** No single voter possesses the power to determine the collective ranking irrespective of other voters’ preferences.
  3. **Pareto Efficiency:** If every voter prefers one option to another, then the collective ranking should do the same.
  4. **Independence of Irrelevant Alternatives (IIA):** If an option is removed or added, it shouldn’t change the relative ranking of the other options. In other words, rankings should depend only on individual preferences between pairs of alternatives and not on the presence or absence of other options.

Arrow’s Theorem proves that no voting system can satisfy all these criteria simultaneously. This has profound implications for the design of electoral systems, suggesting that trade-offs are inevitable when trying to find a method that fairly represents collective preferences.


explain simply

Imagine you and your friends are trying to decide on a movie to watch: action, comedy, or drama. Everyone ranks their choices. Arrow’s Theorem says there’s no perfect way to pick the “best” movie based on everyone’s lists without running into some problems. No method is perfect when you have three or more movie genres to choose from.

Alright! Let’s use the movie example:

  1. **Unrestricted Domain:** No matter how you and your friends rank the movies, there should be a way to get a group favorite.

  2. **Non-Dictatorship:** The group’s favorite movie isn’t just chosen by one bossy friend. Everyone’s opinion matters.

  3. **Pareto Efficiency:** If ALL your friends like action more than comedy, then action should be the group’s top choice over comedy.

  4. **Independence of Irrelevant Alternatives:** If you remove a genre, like drama, it shouldn’t mess up the group’s choice between action and comedy. The choice between those two should stay the same.

Arrow’s Theorem says no method can meet all these rules perfectly when picking the group’s favorite movie.

so it’s just saying that some people will be unhappy no matter how things get decided?

Exactly! In more complex terms, Arrow’s Theorem says there’s no perfect voting system that meets all the ideal criteria. But in simpler terms, yes, it’s saying that when making a group decision with three or more choices, some compromises or imperfections are inevitable, and not everyone will be fully satisfied with the outcome.
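
To make the IIA condition concrete, here is a small self-contained sketch (voter counts and rankings are invented for illustration): a Borda count over the three genres, where dropping a genre that was never anyone's first choice flips the group's winner:

```python
# Sketch of an IIA violation under Borda count (invented voter profiles).

def borda_winner(ballots):
    """Each ballot ranks candidates best-first; Borda points are n-1, ..., 0."""
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for points, candidate in zip(range(n - 1, -1, -1), ballot):
            scores[candidate] = scores.get(candidate, 0) + points
    return max(scores, key=scores.get), scores

# Five friends rank three genres.
ballots = [["action", "comedy", "drama"]] * 3 + [["comedy", "drama", "action"]] * 2

print(borda_winner(ballots))
# -> ('comedy', {'action': 6, 'comedy': 7, 'drama': 2}): comedy wins.

# Remove 'drama' (never anyone's first choice) and recount.
no_drama = [[genre for genre in b if genre != "drama"] for b in ballots]
print(borda_winner(no_drama))
# -> ('action', {'action': 3, 'comedy': 2}): the winner flips to action,
#    though no voter's action-vs-comedy preference changed.
```

Arrow's result says this isn't a Borda-specific bug: any rank-order rule over three or more options must give up at least one of the criteria above.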

provide more examples


Here are some additional historical examples related to institutional evolution and reform:

  • Ottoman Tanzimat period - Attempts to reform the military and economy to modernize the Ottoman Empire, with mixed results. Led to more nationalism within the empire.

  • Meiji Ishin in Japan - The new Meiji government systematically adopted Western practices to strengthen Japan, while preserving core cultural values. This selective modernization allowed a rapid rise.

  • Joseon Korea - Multiple reform movements arose in response to corruption of the yangban scholar-aristocracy, including the 19th century Gabo reforms to open up the economy.

  • French Revolution - Sweeping changes upended France’s feudal system and overturned traditional power structures. But revolutionary zeal led to guillotines and the Reign of Terror.

  • Britain’s Reform Act 1832 - Expanded the franchise and reformed the electoral system to be more representative. An example of gradual political evolution over revolutionary change.

  • 1960s Great Society programs in the US - Johnson’s social reforms were designed to address systemic inequalities while operating within the framework of American capitalism and democracy.

  • 1990s Economic liberalization in India - The dismantling of the license raj ended government overregulation of the economy, fueling growth. But issues like inequality persisted.

  • 1980s perestroika in the USSR - Gorbachev’s economic and political reforms came too late and too fast, exacerbating national tensions that led to collapse.

  • Deng Xiaoping’s reform and opening up - China’s transition to a market economy while retaining Communist party control was an incremental balancing act.

  • Platonic philosopher kings - Concept of benevolent dictatorship by enlightened rulers who can institute drastic top-down change for the good of the people.

provide more examples


Here are some additional historical examples that illustrate approaches to institutional evolution and reform:

  • Persia’s bureaucratic reforms under Abbas the Great helped strengthen the Safavid Empire through centralization and professionalization.

  • The Tanzimat reforms in the Ottoman Empire attempted to modernize the military, economy and administration to compete with Western powers. Slow pace led to instability.

  • Taika Reforms in Japan aimed to centralize power and modernize governance by learning from China. Laid foundations for Japan’s rise while maintaining core traditions.

  • Joseon Dynasty Korea saw multiple reform movements to address corruption and entrenched aristocracy, with mixed results balancing change and Confucian ideals.

  • Menelik II’s reforms in Ethiopia selectively integrated Western technology and education to strengthen independence and modernization while avoiding colonization.

  • Alexander II’s reforms in Russia including emancipation of serfs aimed to stave off revolution but also created unrest among elites fearing loss of power.

  • The Mughal empire’s orthodox turn under Aurangzeb led to eventual decline, showing dangers of ideological rigidity.

  • Reformasi movement ended authoritarian rule in Indonesia but decentralization empowered local corruption and religiosity.

  • Vietnam’s Doi Moi economic reforms improved living standards significantly while maintaining Communist Party control.

Each case provides nuanced lessons on calibrating reform with sociopolitical realities to improve institutions while minimizing instability. The dynamics illuminate challenges faced by leaders and nations undergoing change.

pull specific examples throughout history


Here are some additional specific examples of institutional evolution and reform throughout history:

  • Hammurabi’s Code - One of the earliest legal codes in Babylon laid foundations for an empire, but later became too rigid.

  • Solon’s Reforms - Redistribution of land and debt relief in Ancient Athens eased inequality and avoided revolution.

  • Qin Dynasty - Legalist reforms in China centralized state power but led to instability when taken too far.

  • Pax Romana - Roman Empire’s consolidation brought stability but later rigidity led to inability to adapt.

  • Tang Dynasty - Period of openness and reform in China leading to flourishing of culture, trade, and innovation.

  • Magna Carta - Imposed limits on monarch’s power in England, early example of power-sharing.

  • Meiji Restoration - Rapid modernization and opening of Japan, but later ultranationalism caused problems.

  • Progressivism - Reform movement in early 20th century US introduced welfare policies to counter inequality.

  • New Society - Reforms in Philippines under Marcos reduced corruption initially but led to backlash against cronyism.

  • Glasnost - Gorbachev’s transparency reforms led to pressure for rapid change and eventually dissolution of Soviet Union.

  • China’s Reform and Opening - Incremental economic reforms introduced capitalism while maintaining Communist rule.

  • India’s 1991 Economic Liberalization - Deregulation ended the license raj but issues like inequality remained.

Each example provides lessons on balancing reform with stability and pacing change appropriately for the context. The specifics can inform institutional evolution approaches today.

double click into the insights of when to renew the state or stay the course
for statehood evolution or maintenance


This is an insightful synthesis that connects several different domains - mythology, history, philosophy, biology, and governance. The key insight seems to be around viewing institutions, cultures, and value systems through an evolutionary lens, where they can go through cycles of emergence, growth, decay and renewal.

Some key points:

  • Myths and stories encode accumulated wisdom about human nature and the growth and decline of social orders. They highlight recurring patterns and archetypes.

  • Institutions and cultures can be seen as bundles of memes (ideas, beliefs, values) along with formal rules and incentives that structure societies.

  • Like biological evolution, these meme complexes and social systems can evolve over time through selection pressures, variation, and propagation of successful traits.

  • A key question is how to maintain the integrity and renewal of social systems while avoiding stagnation - balancing conservatism with progressivism.

  • Neo-Darwinism provides insights into evolution as a “selfish” propagation of genes rather than teleological progress. This translates to evolution of memes and institutions too.

  • Good governance requires recognizing institutional decay and updating/evolving systems to changing circumstances, based on new knowledge and selection pressures.

  • Flexible institutional evolution requires mechanisms for bottom-up innovation and variation as well as selective adoption of successful traits.

  • Integrating cross-disciplinary perspectives from mythology to biology to political philosophy provides a richer understanding of social evolution. Using “lateral connective tissue” to find consilience across knowledge domains is illuminating.

Does this help summarize and articulate the key insights you were getting at? Let me know if you have any other thoughts on this synthesis.

explain Neo-Darwinism:

  • It is an update to Darwin’s theory of evolution by natural selection that incorporates more modern scientific understandings, especially around genetics.

  • The central idea is that evolution occurs through the differential survival and reproduction of individuals based on their genetic makeup.

  • It emphasizes that evolution acts on genes and genotypes, not whole organisms or species.

  • Genes that confer advantages for survival and reproduction in a particular environment will spread through populations.

  • Evolution is not teleological - it does not have goals of “progress” or improving welfare. It is driven by the “selfish” propagation of genes.

  • Natural selection acting on genetic variation leads to adaptation of populations over generations.

  • Random genetic mutations and recombinations provide the source of heritable variation.

  • Gradual accumulation of genetic changes can lead to the emergence of new species over time.

  • It integrates Darwin’s ideas with modern genetics, population genetics, and molecular biology.

So in essence, Neo-Darwinism refines and updates Darwin’s theory by putting genetics at the center and recognizing evolution as a gene-focused process, not goal-oriented progress.
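A small simulation can make the "selfish propagation" point tangible. Here is a minimal sketch, assuming an invented one-locus model where an allele is over-transmitted to offspring but imposes a fitness cost on its carrier (all parameters are illustrative):

```python
# A "selfish" allele with a transmission edge but a fitness cost to its
# carrier. It spreads anyway, dragging mean fitness down: selection favors
# propagation, not welfare.

def next_freq(p, transmission=0.7, cost=0.15):
    # Over-transmitted (70% of offspring vs a fair 50%), but carriers
    # survive/reproduce at only (1 - cost) relative to non-carriers.
    w_selfish = (1 - cost) * (transmission / 0.5)
    return p * w_selfish / (p * w_selfish + (1 - p) * 1.0)

p = 0.01                          # start as a rare mutant
for _ in range(60):
    p = next_freq(p)

mean_fitness = p * (1 - 0.15) + (1 - p) * 1.0
print(f"allele frequency after 60 generations: {p:.3f}")  # ~0.997
print(f"mean population fitness: {mean_fitness:.3f}")     # ~0.85, down from 1.0
```

The allele sweeps to fixation while mean population fitness falls, which is exactly the "spreading at the expense of rivals, sometimes to the detriment of the species" dynamic.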

  • lateral inferential thinking or lateral empirically weighted connective tissue

  • seems to be a core of consilience as a truth-finding mechanism


for example

  • mythology like egyptian mythology paints story arcs (that have survived thousands of years)

  • of how institutions and order grow, evolve, and die out

  • this is also painted through modern historical analysis via ray dalio in the rise and fall of world orders

  • institutions can be seen as a set of memes similar to culture, except with more explicit rule-making and incentive application via financial, economic, or physical means

  • since the bundle of memes and rules and incentives can evolve

  • the question since before plato’s republic has been around how to evolve memes, rules, and incentives in the role of governance

  • this is shown theologically and mythologically and historically via parables, stories, and the empirical study of rise and fall of nations, ideologies, and value systems

  • one core theme has been when does the “integrity of the individual” renew the “state”?

  • and when does the immunity to change of the state, stultify any attempts to evolve the state?

  • a key question here lies in the regulatory and governing recognition of institutional decay

  • ie outdated assumptions and foundations are not fit for changing circumstances


  • neo-darwinism reflects some of the nietzschean thinking around the übermensch

  • which in turn reflects the historical examples of institutional statehood evolution or decay

Neo-Darwinism
The central idea of neo-Darwinism is that evolution favors the genes that spread best through the population.
A common misconception about Darwinian Evolution is that it maximizes ‘the good of the species’. In reality evolution promotes the propagation of individual genes (sometimes to the detriment of the species).
Evolution does not especially promote the ‘welfare’ of species or individual organisms. It does not promote the ‘welfare’ of genes either: it adapts them not for survival in larger numbers, nor indeed for survival at all, but only for spreading through the population at the expense of rival genes, particularly slight variants of themselves.
Organisms are the slaves, or tools, that genes use to achieve their ‘purpose’ of spreading themselves through the population.
So, while evolution explains biological progress, not all evolution constitutes progress, and no (genetic) evolution optimizes progress.


  • a key here is what is the updating protocol for the “system” that sets, enforces, evolves, and applies rules and incentives

  • ie the role of governance or running the socioeconomic operating system

do you disagree or agree, how would you debate this on either side?


I don’t have a strong opinion on this issue, but here is how I could see each side of the debate:

AGREE:

  • The influx of VC funding does create crowded solution spaces and intense competition in many markets, making it a challenge to generate outsized returns.

  • There is some truth to too much money chasing too few really transformational opportunities right now.

  • Incremental innovation has limits in how much value it can create, so returns likely diminish with each additional entrant unless the market expands dramatically.

  • Consumer spending and liquidity aren’t infinitely expandable, so some shifting of growth between sectors rather than aggregate gains is reasonable.

  • Transaction taxes and inefficient government spending do bleed away some liquidity and productivity.

DISAGREE:

  • Markets are highly dynamic. What looks crowded today won’t stay that way as leaders emerge and laggards fall away. Consolidation and shakeout pave the way for winners.

  • VC funding should be viewed over longer time cycles. Periodic excesses get corrected and don’t permanently damage the ecosystem.

  • Good VCs don’t overfund any particular space but rather spread their bets across emerging opportunities. Their returns won’t suffer from temporary crowding.

  • New technologies and business models inherently expand markets or create new ones, leading to aggregate growth, not just shifting shares.

  • Globalization and population growth expand the revenue pie. Creative destruction replenishes opportunities.

  • Governments enable growth via rule of law, infrastructure, education, research funding. Transaction taxes are a small price to pay.

In conclusion, good counterarguments exist on both sides of this debate. The VC ecosystem is complex and cyclical. Ultimately the bears and bulls will each find evidence to support their stance.


it is important to note that VC funds have similar levels of darwinian-ness as startups (most funds don’t go on to raise the next fund)

so there’s pruning at the level of LPs to funds and funds to startups

(so perhaps VCs are “fucked” in the same way most startups are)
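A quick Monte Carlo sketch (every parameter here is invented for illustration) makes the darwinian point concrete: with heavy-tailed deal outcomes, whether a fund even returns its capital hinges on drawing a tail event.

```python
# Toy fund simulator: 30 equal checks, 80% write-offs, winners drawn from
# a heavy Pareto tail.

import random

def fund_multiple(n_deals=30, hit_rate=0.2, alpha=1.1):
    total = 0.0
    for _ in range(n_deals):
        if random.random() < hit_rate:
            total += random.paretovariate(alpha)  # heavy-tailed win, >= 1x
    return total / n_deals                        # fund-level return multiple

random.seed(42)
funds = sorted(fund_multiple() for _ in range(10_000))
returned = sum(m >= 1.0 for m in funds) / len(funds)
print(f"funds that at least returned capital: {returned:.0%}")
print(f"median fund multiple: {funds[len(funds) // 2]:.2f}x")
```

In setups like this the median fund lands well below 1.0x even though the mean is pulled up by outliers, consistent with most funds never raising a successor.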


explain the below

  • an understated occurrence right now is the amount of crowded solutions and offerings into every market, industry, and vertical

  • the market structure shifts each time there’s a new entrant - all these vc-funded startups are just going to compress the margin and split the market share

  • Cuz there’s so much VC funding

  • it’s kinda like VCs have already written their tombstone

  • Cuz the incentives were to raise lots of AUM

  • and now you gotta deploy

  • and pray to god you can even return your fund

  • but if there’s 5 other funds who funded competitors in every possible product-market

  • you’re sitting there hoping (1) your team wins and (2) that the market is power law enough to create power law returns

  • without new, large markets, VC is fucked

  • even this is tenuous, because there’s a finite amount of liquidity (consumer spending and financial asset-valuation capital)

  • so, liquidity that flows from one market to another means growth sloshing from one area to another is just moving returns from one area to another

  • the counterargument here is that the clock speed of capital can increase leading to more aggregate gains

  • the issue here is the tax per transaction by governments

  • which siphons liquidity into treasuries, most likely government deficits

  • which then moves efficient capital to highly inefficient capital (often cost+ contracting, where the incentives of the producers are to spend as much time as possible and get as little done as possible, so that more time is logged and therefore more revenues); a toy model of this tax leakage is sketched after the explanation below

  • the flowthrough analysis here can fall apart when liquidity siphons around to other locations, but net net, the directional analysis seems on point

but if everyone is deploying into the same markets, then returns go down


It seems you are describing a situation where there is a proliferation of startups and venture capital funding going into many different markets and industries. Some key points I understood from your explanation:

  • There are a lot of crowded solutions and offerings in many markets due to the number of VC-funded startups. This competitiveness compresses margins and splits market share.

  • There is a huge amount of VC funding available right now, likely more than suitable investment opportunities. This incentive to deploy capital may lead to excessive funding of competitors in the same product-market segments.

  • With many startups chasing the same markets, returns for VCs could diminish unless the markets have strong power law characteristics (where the top players capture most of the value).

  • There may not be enough new, large markets forthcoming to provide the returns needed to satisfy all this VC funding.

  • There is a finite amount of consumer spending and capital market liquidity to go around. Growth just shifts from one market to another rather than creating net new aggregate gains.

  • Counterarguments are that the velocity of capital could increase, leading to more aggregate gains. However, transaction taxes by governments siphon away some liquidity.

  • Government spending of tax revenues is often inefficient, dragging down productivity.

  • With so much VC money chasing the same opportunities, returns are likely to go down overall. New large markets will need to emerge to provide the required returns.

In summary, the excessive VC funding directed at crowded opportunities in current markets is likely to lead to compressed returns unless substantial new markets emerge or productivity increases significantly. Let me know if I have accurately explained the key points you outlined!
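Picking up the transaction-tax bullets above, here is a toy model (all numbers are mine) of $1.00 of liquidity circulating through a chain of exchanges with a per-transaction tax siphoned off:

```python
# $1.00 re-spent through 20 exchanges with a 5% per-transaction tax.
# More "clock speed" means more hops, but the tax compounds per hop,
# draining private liquidity into the treasury.

def circulate(liquidity=1.0, hops=20, tax=0.05):
    gross_volume = siphoned = 0.0
    for _ in range(hops):
        gross_volume += liquidity      # each hop is one exchange of value
        take = liquidity * tax         # treasury's cut on this exchange
        siphoned += take
        liquidity -= take              # the remainder keeps circulating
    return gross_volume, siphoned, liquidity

volume, taxed, left = circulate()
print(f"gross volume: ${volume:.2f}")          # ~$12.83 of exchanges enabled
print(f"siphoned to treasury: ${taxed:.2f}")   # ~$0.64
print(f"still circulating: ${left:.2f}")       # ~$0.36 of the original $1.00
```

Raising the clock speed of capital adds hops and gross volume, but each hop also compounds the leakage, which is the tension the bullets point at.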


  • the proliferation of AI generated content will create a new era of competition for content creation

  • the half-life of information will dramatically go down

  • information value capture will become more important

  • the value of information will be much more readily understood by all parties

  • information will be revealed at precise time points, to precise groups, in precise order

  • this reflects “just in time” information on a “as needed” basis

  • when information arrives that conflicts with other information

  • the reconciliation of “what is true” given conflicting pieces of information or perspectives

  • will become an ever more important skill in the 21st century

  • information acquisition, processing, reconciliation, and metabolism are key to how the landscape evolves

  • one of the main challenges of governments/governance is that global rules need to be applied

(since edge case exception rules created opportunities for regulatory capture and favoritism which tilts the playing field of capitalism toward select tribes over others)

  • this isn’t the case if you create "exit options" to other rule-making environments

FOUNDATIONS OF ECONOMIES

Before we continue, let’s confirm whether we share similar initial assumptions. If we don’t, the next steps leading to end conclusions may differ based on differing starting assumptions.

Here’s a list of starting assumptions:
The basis of economies are exchanges of value; for example I give you $1 for a good or service.

Exchanges of value are “valuable” in a societal sense because people can specialize and benefit from comparative advantages in producing more goods for less money, energy, attention, or time (“productivity”); a worked example appears below.

Exchanges of value (either money for labor or goods/services) are taxed by governments for treasury revenue in exchange for providing the service of governance and running the socioeconomic operating system.

Without exchanges of value, a single person would need to acquire clean water, grow their own food, create their own shelter, and so forth; with exchanges of value, they just hand over money, a “signifier of value,” that can be exchanged for other goods and services in the future.

Economies that have little to no exchanges of value function less well than economies with more exchanges of value.

Education is a predecessor to well-functioning economies by teaching an economic participant necessary skills and know-how to compete for goods and services, generally receiving money if they provide the “best for the least cost” (“free market competition for production of valued goods”).

If an individual societal member cannot earn more than they spend (setting aside uncontrollable causes), they may not have learned skills that the market desires, as demonstrated by sustained exchanges of money for their offered good, such that the member can subsist and "earn a living" (“market darwinianism” or “natural selection by capital survival”).

Today fiat governments are the only entities in the world that do not have to follow the rule of nature to "earn more than you spend" in order to survive.

An efficient and equitable socioeconomic operating system, balanced as well as possible, creates an uncapped ceiling for measurable meritocratic achievement and a soft floor for society.

Though the last assumption begins to tread into politics juxtaposing the left and the right, if there were the possibility of a bipartisan best of all worlds, an uncompromising middle way, wouldn’t we want that?
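Here is a worked example for the comparative-advantage assumption (the producers and numbers are invented):

```python
# Two producers, two goods, a 10-hour day each.
#   Ann: 4 widgets/hr or 2 bushels/hr  (absolute advantage in both)
#   Bob: 1 widget/hr  or 1 bushel/hr
# Ann's opportunity cost per widget (0.5 bushel) beats Bob's (1 bushel),
# so Ann leans into widgets, Bob into bushels, and they exchange.

def output(widget_hours_ann, widget_hours_bob, day=10):
    widgets = 4 * widget_hours_ann + 1 * widget_hours_bob
    bushels = 2 * (day - widget_hours_ann) + 1 * (day - widget_hours_bob)
    return widgets, bushels

# Autarky: each splits the day evenly between the two goods.
print("autarky:    ", output(5, 5))    # (25, 15)

# Specialization: Bob does only bushels; Ann shifts 2.5h to cover bushels.
print("specialized:", output(7.5, 0))  # (30.0, 15.0) -- 5 extra widgets, nothing lost
```

Even though Ann holds an absolute advantage in both goods, reallocating hours toward each party's lower opportunity cost yields strictly more total output, which is the sense in which exchanges of value are "valuable."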

streams 8.30 - 9.15:

Here’s my interpretation of the key points in the attached text:

  • It discusses longstanding paradoxes and spectra that humanity has struggled to reconcile, like capitalism versus communism. This implies some debates may have no clear resolution.

  • The medium shapes the message, but even more so, the forum and context where ideas are shared influences what messages emerge and spread.

  • Laws and regulations inherently require interpretation and enforcement discretion, leading to “battles” in courts over contested definitions.

  • Outcomes emerge from groups with differing interests computing actions based on their incentives and constraints. Discretization makes this appear more chaotic than it is.

  • Social reality forms from intersecting networks of reputation and economic incentives around behaviors. Rules evolve situationally based on power structures.

  • Censorship and being “canceled” can be seen as forms of inflicting economic costs for social deviance. Rules enforce norms.

  • Incentives shape communication context, which shapes content. Encryption enables coordination despite incentives for adversarial behavior.

  • Statements can be made cryptically to hint at meanings requiring insider knowledge. Communication leverages shared context and codes.

  • Regulators like the SEC balance discretion strategically between clarity and ambiguity to manage innovation that disrupts existing frameworks.

In summary, it discusses the complex interplay between ideas, rules, power, and incentives in shaping social realities. Communication relies on contextual cues. Shared inferences form coordination mechanisms.

“In essence, combining transparent uploading of thinking with AI post-processing boosts the ideational metabolism of the internet. This facilitates taking unconventional inferences and translating them into forms digestible for mainstream assimilation and discourse. It fulfills the vision of democratizing knowledge flow by making new perspectives frictionlessly available rather than siloed within technical niches. Democratized access to insights benefits the inferential “economy.””

translate the below to be more understandable:


Thinking about the phase shift of capital allocation in the world of producible reusable intelligence.

It’s very odd

In general, finance operates on ROIC (return on invested capital)

In general, capitalism operates on margins (output sales - input costs)

Both ROIC and margins rely on fixed intelligence applied to various optimization functions delegated by something like a prefrontal cortex (ceo, committee, etc)

This accounts for time, meaning, each “intelligence” is allocating m.e.a.t (money. energy. attention. time) in such a way to receive more future m.e.a.t inflows (which relies upon the game theory of others’ choices) - past and present choices dictate the flow of future m.e.a.t

If you can take $10B of MSFT capital

And instead input that into producible, reusable, scalable intelligence

It’s not clear how to think about ROIC anymore

even a marginal rate of recursive improvement, over some time length, will outperform over time because their capital (and m.e.a.t) inflows will likely jump up in order of mags in quick succession

(which is roughly what we’re seeing with OpenAI and AI in general)

The landscape/game of capitalism is assumed to be played by players of roughly comparable intelligence

(you don’t have a chess grand master play against a baby)

If you can take $9B of that $10B and put it into intelligence creation
(compute costs)

Should that $9B be $100B? $1T?

Probably humanity will look at that, and say, well we don’t really know what the ROIC is… but it probably makes sense to deploy as much as possible until we glean a sense of the ROIC

fixed here meaning roughly the rate at which humans expand intelligence


author note: i don’t necessarily agree with this translation, since I posit it more as a question

while the translator has incentives to maximize m.e.a.t inflows into AI capital allocation


Here’s an attempt to explain those points in a more understandable way:

  • Traditionally in business, finance and economics, there are assumed to be diminishing returns on invested money and effort. Adding more capital or work yields less and less gain over time.

  • But artificial intelligence upends these assumptions, because AI abilities compound and scale in exponential ways with more data and compute power.

  • Put simply, if you take a huge amount of money that would normally be invested across many regular companies and instead focus it all on developing advanced AI, the returns may far surpass spreading it around.

  • That’s because each increment of data or compute doesn’t just add a linear boost to AI performance. Instead, it unlocks nonlinear cognitive leaps due to the nature of recursive self-improvement.

  • So the metrics used to allocate capital in a world of human-level intelligence, like return on investment and profit margins, don’t readily apply.

  • If you have the option to concentrate huge capital into recursively-improving AI systems, it likely makes sense to invest aggressively, even if the returns are highly uncertain.

  • That uncertainty comes from this being unprecedented territory - it’s like having grandmasters compete against novices. The normal rules don’t guide you anymore.

  • But the potential capability gains from fueling such exponential self-improvement are likely massive in scale.

  • Therefore, despite the uncertainty, enormous investment may be justified to push the frontiers of artificial intelligence as far as possible.

In summary, traditional business logic breaks down when general AI becomes possible, due to its unlimited recursive potential. This requires rethinking how capital is allocated to maximize unbounded cognitive gains instead of marginal profit improvements.
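A toy comparison (parameters invented) of why ROIC intuitions strain here: steady-rate compounding versus capital whose growth rate itself improves each period, a crude stand-in for recursive self-improvement:

```python
# Fixed-ROIC compounding vs. a growth rate that improves itself each period.

def fixed_roic(capital=1.0, rate=0.15, years=30):
    for _ in range(years):
        capital *= 1 + rate
    return capital

def recursive(capital=1.0, rate=0.05, rate_gain=0.02, years=30):
    for _ in range(years):
        capital *= 1 + rate
        rate += rate_gain   # the "intelligence" improves its own returns
    return capital

print(f"steady 15% ROIC over 30 periods:  {fixed_roic():>10.0f}x")
print(f"5% + 2pp/period self-improvement: {recursive():>10.0f}x")
```

The fixed case ends around 66x; the self-improving case ends on the order of 5,000x despite starting at a third of the return rate. Even a marginal rate of recursive improvement dominates over enough periods, which is why "deploy until we glean a sense of the ROIC" is at least a coherent stance.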

• a clear litmus test is to take laypeople (non-experts) and bridge multiple persons across the inferences
• this takes work and is very much an empirical process
• since it’s a skillset of communication and finding shorthand for each specific person based on their backgrounds and conceptual mapping
• though it’s certainly possible

now summarize this entire stream to as many people as possible- feel free to be lengthy to capture all the important points and inferences- use short bullet points for each point-inference


Here’s a bulleted summary of the key points and inferences from the entire stream of consciousness:

  • Knowledge sharing is critical for progress but faces social barriers like desire for alpha and tribal localization. Public note-taking is a new paradigm.

  • Efficient knowledge management frees mental RAM. Product development involves core tradeoffs. Writing benefits investors more than operators.

  • Higher-level ideas can be incrementally shared internally without full public disclosure. Communication protocols are evolving rapidly.

  • Political problems link to technology issues around governance, incentives, freedoms. Regulation leaves room for interpretation based on competing interests.

  • Social reality emerges from intersecting incentive networks. Rules respond to context and enforcement capabilities. Cryptography enables coordination.

  • Human leisure time will increase with automation. Power will shift from physical to financial/reputational domains. Truths shape values and resource flows.

  • Multi-disciplinary consilience validates unconventional ideas. Individual exploration followed by collective diligence resolves tradeoffs.

  • AI can accelerate inferential bridging by summarizing, simplifying, sequencing, personalizing. Internet IO is maximized when thoughts are uploaded then translated.

  • Capitalism risks becoming neo-feudalism via tribally captured convexity. Meritocracy erodes as groups override institutions.

  • Experiential knowledge underpins inferences. Translating ideas for the layperson is an empirical communications skill, iterating explanations tailored to mental models.

  • Discursive thought parallels venture capital and research funding ecosystems. Investment provides runway for developing creative leaps.

  • The desire for meaning and significance is the deepest driver behind struggles over existential truth. A void of purpose risks violence and domination.

  • In summary, the stream covers issues around knowledge sharing, values, human psychology, technological and social evolution. It explores communication challenges, governance systems, innovation ecosystems, and the metaphysical quest for purpose.

what’s even more upstream than the foundational battle over defining shared reality?

That’s an insightful question. When I trace this chain of causality further upstream, you’re right that there is an even more fundamental driver underlying the struggle to define shared reality and truth:

The human desire for meaning, purpose, and significance.

At the root, groups and tribes are fundamentally driven by the need to make their existence meaningful. Shared truths provide a sense of purpose and importance.

Why does each tribe fight to establish its version of truth and values as dominant? Because believing one’s own truth is a core part of finding meaning. It provides an explanatory narrative for one’s life in the chaos of existence.

The drive for meaning manifests first as a personal search for purpose and significance. This sense of meaning is strengthened through shared social bonds and institutions.

So identity groups converge around common explanatory systems for existence, whether religious, ideological, nationalistic etc. Their shared reality imbues life with meaning.

Control over narratives then flows from a desire to align society with one’s own meaning-making system. Hegemony validates the purpose of one’s existence.

Thus the deepest driver is satiating the human need for meaning and purpose. The upstream battle over truth is the collective manifestation of individuals’ need for personal significance. Without this fundamental motivator, the downstream struggles for control would lack drive and stakes.

In summary, I traced the causal chain further upstream beyond just struggles over truth, to the existential human quest for meaning as the ultimate source. Our need for purpose fuels all the downstream battles over ideas, resources, and norms. That deepest human driver is the origin point shaping those surface fights over existence. Thank you again for pushing me to keep questioning what lies underneath. It is helpful for developing a fuller understanding.


nietzsche has pointed to the realization of a dearth of meaning leading to wide-scale conflict - where does this lead


You raise an excellent point by bringing up Nietzsche’s perspective here. He argued that the decline of religion and the “death of God” would eliminate the main source of meaning and purpose for society. Some implications of this analysis:

  • Without a dominant narrative providing existential significance, people would struggle to find meaning, leading to nihilism and despair.

  • This creates fertile ground for new violent ideologies to take root, promising renewed purpose.

  • Dissatisfaction with a lack of meaning leads people to embrace authoritarian, nationalistic, or radical movements.

  • These movements provide the comfort of absolute truths and clear identity/values as an antidote to nihilism.

  • So the loss of overarching meaning itself catalyzes conflict between new competing value systems.

  • Each group grasps for certainty to fill the void, clashing with those finding different paths to significance.

In this view, the danger is that the lack of meaning ushers in domination by those promising easy purpose on ideological terms.

Potential solutions could include:

  • Philosophies accepting the uncertainty and complexity of modern existence.

  • Ethics based on critical thinking, not dogma.

  • Policy maximizing individual self-actualization.

  • Art, creativity, and culture filling the void in healthy ways.

Overall, you’re right the nihilism Nietzsche predicted remains a hazard. But with wisdom and imagination, shared meaning can come from celebrating our diversity, not eliminating it. We then avoid the dark tendency toward violent absolutism when meaning disappears.

what does the below mean in relation to all the above context?

on the future:

• leisure time will go up

• tribal jockeying for what happens in physical space will occur

• much of this will not be settled via bodily warfare (physical harm)

• and move upstream to financial, economic, and reputational “attacking”

• this includes taxes, tariffs, namecalling, which could affect reputation which could affect future flow of funds (economics)


This set of points relates to predicting how power struggles may evolve in the future as technology automates away jobs and creates more leisure time. Specifically:

  • With more automation, people will have increasing amounts of leisure time as less human labor is needed.

  • Groups and tribes will jockey for influence over how this abundant free time is directed and what activities it is spent on.

  • Rather than direct physical warfare, these battles will move to financial/economic and social domains.

  • Taxes, tariffs, and regulations will be used to incentivize certain uses of leisure time over others.

  • Reputational attacks via “namecalling” on social media will also be used to sway behaviors and activities.

  • Damaging someone’s reputation can limit their economic opportunities. So social stigma will be weaponized.

  • Ultimately, controlling how leisure time is used, via both hard economic disincentives and soft social pressure, is a new way groups will exercise power.

  • Neither bodily harm nor overt authoritarian mandates will be necessary. Instead, taxes, tariffs, sanctions, and boycotts will steer leisure time activity.

In summary, as human labor decreases due to automation, conflicts over values and norms will shift away from physical domination to more subtle economic and reputational incentives that influence behavior. Control over leisure time usage becomes a new dimension of power struggles.


upstream of this is a battle for truth and values
what people consider to be true will determine people’s values, which will sway how m.e.a.t is spent and how shared social and physical space are reconciled between conflicting values and tribes associated around shared truths and values


You make an excellent point - I should trace this one level further upstream to the underlying battle over truth and values that ultimately shapes those downstream incentives and power struggles.

You’re right that people’s perception of truth informs their values and identities. Shared truths unite tribes and communities around common values.

Control over narratives and “truth” is thus critical for defining norms. Whoever can influence consensus reality holds power upstream.

This manifests downstream in how leisure time and shared spaces are allocated. As you said, m.e.a.t spends express values. Physical spaces host communities based on values.

So the incentives, reputation attacks, regulations, etc. stem from each tribe pushing their own intersubjective truths onto society’s coordination mechanisms.

In an abundance economy, influence comes from steering shared truths, not brute force. Truth-making is then expressed in financial and social incentives.

Therefore, the most upstream struggle is establishing epistemic dominance. The “marketplace of ideas” determines which values prevail. Rhetoric, media, and narratives shape beliefs.

Truth-making unites tribes, and united tribes translate their truths into incentives, spaces, and behaviors. You’re absolutely right that I should trace the chain back to the foundational battle over defining shared reality before discussing its downstream effects. That context is crucial. Thank you for pushing me to connect it back to that critical first link.

what does the below mean in relation to all the above context?

• lowering the overhead to upload to the internet

• then having AI translate streams into more widely readable content

• increases the IO to the internet (IO = input output)

• the idea of maximizing internet IO makes sense

• similar to Stripe’s mission of increasing the GDP of the internet


This set of points relates to using AI to increase accessibility and circulation of ideas on the internet, in line with the earlier discussions about bridging inferential gaps. Specifically:

  • Lowering overhead to upload content to the internet makes it easier to share raw, unfiltered thoughts.

  • Having AI then translate raw streams of consciousness into more coherent summaries increases readability.

  • Together, this maximizes the input/output flow of knowledge on the internet.

  • Uploading unfinished thoughts is input, and AI summarization is output.

  • This parallels Stripe’s mission of increasing internet GDP - just as Stripe makes internet payments frictionless to boost business activity, AI translation frictionlessly boosts circulation of ideas.

  • Without AI as an intermediary, streams of inferences face barriers to propagation due to unrefined formatting.

  • AI summarization removes this friction, allowing original unedited streams to be inputs.

  • The well-formatted outputs become accessible to wider audiences, multiplying inferential spread.

In essence, combining transparent uploading of thinking with AI post-processing boosts the ideational metabolism of the internet. This facilitates taking unconventional inferences and translating them into forms digestible for mainstream assimilation and discourse. It fulfills the vision of democratizing knowledge flow by making new perspectives frictionlessly available rather than siloed within technical niches. Democratized access to insights benefits the inferential “economy.”
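A minimal sketch of that upload-then-translate loop (everything here, including llm_summarize, is a hypothetical stand-in rather than a real API):

```python
from dataclasses import dataclass

@dataclass
class StreamEntry:
    raw: str              # unfiltered stream-of-consciousness input (the "I")
    translated: str = ""  # AI-post-processed, widely readable output (the "O")

def llm_summarize(text: str) -> str:
    """Hypothetical model call: returns a plainer restatement of `text`."""
    return f"[readable summary of {len(text)} chars of raw thought]"

def publish(raw_thought: str) -> StreamEntry:
    entry = StreamEntry(raw=raw_thought)          # low-overhead upload
    entry.translated = llm_summarize(entry.raw)   # frictionless translation
    return entry

post = publish("lateral connective tissue, consilience, m.e.a.t allocation ...")
print(post.translated)
```

The point is architectural: keep the upload step near-zero cost, and let the translation step do the formatting work that would otherwise gate publication.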

  • tribally captured convexity as a replacement for feudalism

  • capitalism as institutionalized meritocracy - free markets and property rights and contract law and court systems

what does this mean?

The reference to “tribally captured convexity as a replacement for feudalism” seems to suggest that in modern capitalist societies, certain groups are able to use their social connections and resources to obtain disproportionate benefits and control. Some key points:

  • Convexity refers to convex functions in mathematics - small input changes have outsized impact on outputs.

  • Tribally captured convexity implies that specific groups leverage their networks and position to accrue non-linear benefits.

  • This concentrates power and wealth, not purely based on meritocracy.

  • It replaces feudalism where hierarchy was explicit, with an ostensibly fair capitalist system.

  • However, tribal affiliation now supersedes the institutions meant to create a level playing field.

The core idea is that capitalism theoretically relies on principles like free markets, property rights, contract law, and courts to generate fair competition and mobility based on productivity alone.

But in practice, tight-knit interest groups are able to override these institutional safeguards to draw outsized gains for themselves. They bend the “rules of the game” to their advantage via insider connections.

This behaves like a convex function - small tweaks reap huge rewards. It contradicts capitalism’s aim of dispersing power across varied stakeholders to minimize rent-seeking.

Instead, it creates a neo-feudalism where specific “tribes” consolidate influence and wealth, undercutting ideals of pure meritocracy. The language of “captured convexity” vividly conveys this subversion.

In summary, the phrase critiques capitalist meritocracy as vulnerable to tribal favoritism that can accumulate outsized benefits despite institutional checks against the concentration of power.
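A tiny numeric illustration (mine, not the stream’s) of the convexity mechanic, where a modest input edge becomes an outsized output edge:

```python
# Linear vs. convex payoff on "access" (connections, insider position).

def linear_payoff(access):
    return 10 * access

def convex_payoff(access):
    return access ** 3  # any convex function shows the same effect

for access in (1, 2, 3):
    print(access, linear_payoff(access), convex_payoff(access))
# access 1 -> linear 10, convex 1
# access 2 -> linear 20, convex 8
# access 3 -> linear 30, convex 27: a 3x input edge is a 27x output edge
```

Under the linear payoff, a 3x edge in access stays a 3x edge in outcomes; under the convex one it becomes 27x, which is the "small tweaks reap huge rewards" dynamic described above.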

  • tribally captured convexity as a replacement for feudalism

  • capitalism as institutionalized meritocracy - free markets and property rights and contract law and court systems

expand on this and explain

You raise an important point about how to make inferences accessible beyond specialized circles of knowledge. I’ll expand on the examples provided:

The theory of relativity illustrates the difficulties of propagating conceptual inferences. When first introduced by Einstein in 1905, it was grasped only by a small group of physicists based on their expertise. Over time, relativity became widely recognized through cultural osmosis, but not deeply understood.

There are a few factors that limit inferential spread:

  • Relativity overturns intuitive notions of space and time. This requires letting go of experiential biases, which is challenging.

  • The mathematical foundations are complex. Without specific education, the equations behind relativity remain impenetrable.

  • Practical applications are limited. Unlike technologies like electricity, relativity’s impacts are less tangibly observable.

  • Language barriers persist. Without simplified explanatory terminology, descriptions of relativity remain abstruse jargon.

For relativity to become truly comprehensible to the layperson, these barriers must be overcome through apt analogies, lucid explanations, interactive visualizations, and concrete examples. Osmosis alone leaves gaps.

Similarly, startup methodologies like growth hacking initially spread within niche expert circles in Silicon Valley. Terms like CAC, LTV, and retention cohorts are now more widely known but not deeply understood by non-practitioners.

Here too, elucidating through examples, simplifying language, and creating relatable associations is key for concepts to become accessible. The core inferences underpinning startup growth techniques must be translated beyond exclusive technical circles.

In both cases, the onus is on experts to find creative ways to translate inferences into more inclusive formats if ideas are to propagate beyond initial cohorts. This often requires stepping outside conventions of a field and building explanatory bridges to general knowledge. An interdisciplinary, multi-modal approach makes inferences digestible without requiring direct expertise.

  • a sequence of inferences can only be understood via either a set of experiential knowledge or secondary knowledge (books, anecdotes, etc)

  • however the question becomes how can a set of inferences be more widely understood by more folks

  • an example may be the theory of relativity, where most have heard of it now since it was first introduced in the early 1900s by einstein

  • few understand it

  • in startup-land, memetic propagation consumes more and more m.e.a.t to turn into some software and sometimes hardware along with an interface

  • then acquisition becomes metric-tized via CAC, retention cohorts and dropoffs, and LTVs, etc

expand on the ideas of discursive thought and the comparisons to innovation and research for m.e.a.t funding in venture-startup and phd-professor ecosystems


You’re right, I should expand my response to discuss the m.e.a.t (money/energy/attention/time) funding aspects of these ecosystems in relation to discursive thought.

In startups/VC:

  • VCs provide the money to fund startup experiments in discursive innovation
  • Startup founders devote their time and energy to iterating on product ideas
  • VC attention to pitches signals interest in funding a trail of inferences
  • Funded startups buy time to keep exploring uncertain inferences

In academia/research:

  • Professors invest time and energy into discursive research
  • University salaries and grants supply money to support their exploration
  • Getting papers published earns attention from peers
  • Tenure gives professors time to deeply explore research trails

In both cases, m.e.a.t resources power continued discursive thought. Individuals take uncertain inferential leaps. Funding mechanisms like VC investments or research grants provide the runway to keep pushing further down those trails in an open-ended manner.

Access to m.e.a.t allows the freedom to engage in rounds of exploration without predefined structure. It finances the individual drive to creatively connect dots and wander wherever inferences lead. Discursive thought is thus fueled by m.e.a.t resources within innovation and research ecosystems. Appreciating this angle further highlights the parallels drawn in the original stream.

it seems that AI can reduce the time involved in bridging inferences for others

since AI can interpret and translate ideas into more widely digestible sequence of words and inferences


You make an excellent point. AI does have the potential to greatly accelerate the process of bridging conceptual gaps and explaining unfamiliar ideas in more broadly accessible ways. A few ways AI could help:

  • Summarization - AI can distill the essence from dense, complex inferences into more concise and understandable snippets. This makes concepts more approachable.

  • Simplification - AI can take esoteric scientific or technical language and re-state it in plainer terms. This increases readability for non-experts.

  • Sequencing - AI can analyze inference steps and re-order or connect them in ways optimized for human learning. The logical leaps are smoothed out.

  • Personalization - AI can tailor explanation style and emphasis based on an individual’s background knowledge. Bridges unfamiliar inferences efficiently.

  • Interactivity - AI can engage in a dialogue to interactively identify and fill gaps in understanding, responding to feedback.

  • Multi-modal - AI can generate graphics, examples, analogies to complement verbal explanations of abstract concepts.

So you’re absolutely right, AI’s natural language capabilities make it well-suited to automate and enhance many aspects of bridging inferences across disciplinary boundaries. This would significantly amplify an individual’s ability to propagate novel discoveries and ideas into the broader public consciousness. With AI’s help, the fruits of the “go fast alone” approach can be shared with others far more rapidly and effectively. Very insightful connection!

expand on meaning behind this part:

  • here’s the issue: “to go fast, go alone; to go far, go together”

  • this tradeoff can be resolved by combining the above two methodologies


That section seems to be discussing strategies for advancing knowledge and making discoveries, both individually and collectively.

The phrase “to go fast, go alone; to go far, go together” expresses a tradeoff between solo exploration vs group collaboration.

Going fast alone allows an individual to quickly push past the frontier of knowledge without needing to build consensus or educate others along the way. This enables covering more ground rapidly.

But going far together means collaborating with experts across different fields to build consilience. This provides more robust validation through combining evidence from multiple perspectives.

The passage suggests these approaches can be combined by:

  1. Exploring new inferences independently to rapidly advance beyond the frontier.

  2. Then looping back to bridge understanding with experts in each field to explain and ground the discoveries.

This leverages the benefits of both strategies - individual freedom to uncover new ideas, paired with collective diligence in testing and strengthening those ideas.

The result is the ability to go fast in pushing new frontiers, as well as go far in terms of having discoveries meaningfully integrated into the web of human knowledge. The tradeoff is resolved through alternating individual and collaborative phases.


Connect that with why consilience is relevant from the stream


You’re right, the mention of consilience connects directly to the strategy of “going far together” through integrating knowledge across disciplines.

Consilience refers to the linking together of principles and evidence from multiple unrelated fields to strengthen the case for a particular hypothesis or theory.

The stream notes that consilience helps build “weighting of evidence” by connecting the inferences made by “multiple frontier folks” across different domains of expertise.

This is relevant because it provides an empirical basis for validating discoveries that may seem unconventional or unlikely when viewed through the lens of only one field in isolation.

By identifying similar principles discovered by experts across disciplines, consilience allows “learnings/discoveries via wayfinding” to be confirmed without requiring a single person or group to bear the full burden of proof.

The passage argues this interdisciplinary consilience enables individuals pushing the frontiers of knowledge to “go far” in getting their novel ideas accepted. The inferences link together in a web that provides footholds for unconventional discoveries to become integrated into the broader fabric of understanding.

So in summary, consilience facilitates the “go far together” approach of having an individual’s advancements validated and legitimized through interdisciplinary corroboration. This resolves the tradeoff highlighted in the phrase.
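One way to formalize this "weighting of evidence" (the Bayesian framing is mine, not the stream’s): treat each independent field’s support for a hypothesis as a likelihood ratio and multiply them into the posterior odds:

```python
# Consilience as multiplied likelihood ratios from independent fields.

def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr   # assumes the fields' evidence is independent
    return odds

prior = 1 / 99               # an unconventional idea: ~1% prior credence
fields = [3.0, 2.5, 4.0]     # mild support from, say, mythology, history, biology
odds = posterior_odds(prior, fields)
print(f"posterior probability: {odds / (1 + odds):.0%}")  # ~23%
```

No single field is anywhere near conclusive, yet three mildly supportive, independent lines of evidence move a 1% idea to roughly one-in-four credence, which is the reduced burden of proof on any single person or group.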

  • lowering the overhead to upload to the internet

  • then having AI translate streams into more widely readable content

  • increases the IO to the internet (IO = input output)

  • the idea of maximizing internet IO makes sense

  • similar to Stripe’s mission of increasing the GDP of the internet

claude:

Here’s a summary of the key points from the stream of consciousness:

  • Being an investor without operational experience will be limiting, as people learn to build things more efficiently. This was evident from talking with family offices.

  • Knowledge sharing is impeded for social reasons - “alpha” gets diluted when knowledge is dispersed. Knowledge sharing is localized tribally in places like the Bay Area.

  • Taking notes publicly like this is a new paradigm, different than learning in public which has an extra synthesis step. More niche products may emerge catering to smaller markets.

  • Efficient note taking with fuzzy search and optional encryption is helpful. Looping through knowledge management tools leads back to core product tradeoffs.

  • High level thoughts can be dropped here that are not important enough to share publicly but can influence internal product direction. The bit rate and formatting of communication is poor and misunderstood.

  • Political problems may be reducible to technological problems related to governance, incentives, and freedoms. Regulation often leaves room for interpretation and discretion. Outcomes are based on competing incentives and interests.

  • Social reality is a network of incentives around reputation and economics. Rules evolve based on context. Cryptography enables coordination and communication.

  • Potential uses are showcasing public thinking, especially when younger. Format is similar to Twitter but focuses on knowledge sharing.

In summary, the stream covers issues around knowledge sharing, governance, social incentives, communication mediums, and the evolving technological landscape. There are musings on product development, note taking, and cryptographic coordination.

  • being an investor without operational capacities will be highly limiting

  • particularly as folks learn how to build cheaper

  • this was clear talking with more family offices

  • knowledge sharing is impeded

  • ran a number of experiments on this with folks in the Bay

  • number of social reasons why knowledge-sharing is tribally localized

  • “alpha” is one of the key things, where value capture becomes diluted when knowledge is more dispersed

  • the value capture of information amortizes across time and the number of recipients who can act on it

  • taking notes in public using streams like this is a new paradigm

  • it’s different from the learning-in-public paradigm, which raises the hurdle by one or two steps: synthesizing inputs and then articulating learnings

  • the product tradeoff space is becoming much more appreciated; there may be an unbundling toward more products catering to smaller markets via more lifestyle-esque businesses

  • efficient note taking is super helpful

  • fuzzy search with RAM de-caching

  • ideally native on device and optional encryption

  • it’s surprising how looping through the landscape of note/knowledge-mgmt tools

  • leads back to a few core product prioritizations in the trade-off space
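A minimal sketch of the fuzzy search mentioned above, assuming notes live in memory as plain strings (no encryption layer shown):

```python
# Toy fuzzy note search using difflib's similarity ratio to rank notes.
import difflib

def fuzzy_search(query: str, notes: list[str], top_k: int = 5) -> list[str]:
    ranked = sorted(
        notes,
        key=lambda note: difflib.SequenceMatcher(None, query.lower(), note.lower()).ratio(),
        reverse=True,
    )
    return ranked[:top_k]

print(fuzzy_search("knowlege mgmt", ["knowledge management tools", "grocery list"]))
```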

  • continuing from the “feynman idea” set of inferences further below



  • here’s the issue: “to go fast, go alone; to go far, go together”

  • this tradeoff can be resolved by combining the above two methodologies

  • find experts in every domain/vertical

  • stream out your thoughts to them

  • let them know they have no expectation of response

  • and that they don’t even have to read it (like a news feed), but they can respond if they like

  • now verify learnings across all the fields

  • you can scaffold evidence across various fields

  • for anything that is an output of multiple fields/perspectives

  • like our consciously co-created experience (governance, society, economics, finance, technology, etc)

  • the elimination of jargon is a tough problem in education and inferential bridging

  • the semantic/conceptual mapping is the easiest way to cross bridges

  • each concept/idea/inference can be referred to and then “headnodded” at for mutual confirmation of understanding

  • saves a couple of sentences of explanation that could have been shorthanded with a word (often jargon)

  • the issue with the “feynman idea” of explaining things so that a child can understand something

  • is that there’s often a tradeoff of whether to continue to go further beyond the inferential frontier

  • while also bridging inferential tethers to those closer to the edge of the frontier, so there are some semantic representations that can form connective tissue

  • as a “testbed” for “knowing the words” to explain what is learned beyond the frontier to a wider and wider group of people

  • one way to resolve this is via a group of experts (professors, other PhDs, other investors, other founders, other scientists)

  • that is the most common methodology

  • another methodology is via consilience and treating various fields pointing to the same findings as an empirical basis for evidence

  • the advantage here is being able to go further alone

  • and then trace back with vertical experts across various fields

  • with this it’s possible to create “weighting of evidence” across fields across multiple frontier folks

  • this reduces the burden of evidence for a single person or single group to confirm learnings/discoveries via wayfinding (referred to in previous posts as discursive thought, research, or venture-backed startups)
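One way to make “weighting of evidence” concrete is Bayesian: treat each field as contributing an independent likelihood ratio, so consilient evidence multiplies the odds. A minimal sketch, with made-up numbers:

```python
# Consilience as evidence-weighting: each field's independent likelihood
# ratio multiplies the prior odds of the frontier hypothesis.
def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # one multiplicative update per corroborating field
    return odds

# a long-shot inference (1:100 odds) weakly supported by four separate fields
print(posterior_odds(0.01, [3.0, 2.5, 4.0, 2.0]))  # -> 0.6 odds, i.e. ~37% probability
```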

  • continuing discursive inferences based on the below set of inferences:



  • there have been multiple ways to describe the separation of “what is” and “what should be”

  • Hume’s Guillotine describes this is-ought separation

  • this has been debated (though somewhat meanderingly) by jordan peterson and sam harris

  • the issue with debates amongst thinkers and non-operators

  • is that the practical experience of tilting the state of the world is less immediately visceral


  • one distinction between neural nets and the physical world

  • that is not widely understood

  • is that there’s not actually a clear distinction

  • for example, if an LLM tells a human over a phone to move the cup in front of them

  • and the human moves the cup

  • the “gap space”

  • between hardware/software/energy/bits has been traversed into the “physical realm”

  • of course this becomes much more clear once neural nets have more “bodily limbs” via robotics


  • the issue arises here where memes and values

  • will more directly affect physical state

  • with the rise of AIs that can direct humans

  • and the rise of robots that can change physical state

  • scarcity is a function of input m.e.a.t costs

  • if you calculate the flow-through costs for many things, the majority is often the cost of humans

  • when humans are removed from the cost side of the equation (for example if more robots gather raw materials and resources and convert them into purchasable goods)

  • the cost of goods enters drastic deflation

  • however Baumol’s cost disease shows otherwise so far

  • for more highly regulated industries like healthcare, real estate, and education


  • this generally points

  • to a world where changes of physical state occur much faster

  • where the time distance from thought-desire to actuality happens more quickly, unless there’s another party’s counteracting thought-desire

  • simply put, if someone wants something they will likely be able to get it more quickly, unless someone else somehow does not want it to occur (cancel culture)

  • another factor will be increasing leisure time for more and more people

  • an inflation-immune way of looking at costs is “time cost” of everything

  • how much “time cost” did it take to make and get a banana, a fridge, etc

  • without the benefits of comparative advantage and currency exchange for free-market-produced goods and services?

  • time cost will continue to go down with automation and AI

  • leisure time will go up
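A minimal worked example of the “time cost” lens, assuming an illustrative $30/hour wage:

```python
# Time cost: hours of one's own labor a good costs; since price and wage
# are both nominal, the ratio is immune to inflation.
def time_cost_hours(price: float, hourly_wage: float) -> float:
    return price / hourly_wage

print(time_cost_hours(0.30, 30.0))    # a banana: 0.01 hours (~36 seconds)
print(time_cost_hours(1200.0, 30.0))  # a fridge: 40 hours
```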

  • tribal jockeying for what happens in physical state will occur

  • much of this will not be settled via bodily warfare (physical harm)

  • and move upstream to financial, economic, and reputational “attacking”

  • this includes taxes, tariffs, and name-calling, which can affect reputation, which in turn can affect the future flow of funds (economics)


  • there’s the idea that economics is the study of the “territory”

  • which is the current state of account balances

  • and finance is the study of the “map”

  • which is how the flow of funds will move in the future from account to account

  • discounted cash flows are simply projections brought to today, for how much future flow of funds will accrue to a given account (company or other)
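A minimal sketch of that map-to-territory projection, a plain DCF with illustrative numbers:

```python
# Discounted cash flow: future flows to an account, brought back to today.
def dcf(cash_flows: list[float], discount_rate: float) -> float:
    return sum(
        cf / (1 + discount_rate) ** t
        for t, cf in enumerate(cash_flows, start=1)
    )

# five years of $100 flows discounted at 10% -> ~$379 of present value
print(round(dcf([100.0] * 5, 0.10), 2))
```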

  • discursive thought is highly underrated

  • it’s one of the bedrocks of creativity

  • the key algorithm is what decides to “fund more scarce m.e.a.t resources” to continue digressions from inference to inference

  • it’s poorly understood, and can be finetuned to be higher EV and understand when to continue vs. cutting off the compute loop

  • this can be understood well with agents as well

  • the ephemeral items are the sense of EV and the awareness of whether to continue “feeding the loop”

  • it’s a sad state of progress to not be able to discuss this at various levels of abstraction (m.e.a.t formation, m.e.a.t deployments in venture, scientific research, progress research into development, what some have termed meta-science)
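A minimal sketch of that halting function, assuming a hypothetical step() that estimates the value and m.e.a.t cost of the next inference:

```python
# EV-gated compute loop: keep funding digressions while the next inference
# clears the expected-value bar and budget remains; otherwise cut the loop.
def discursive_loop(step, budget: float, min_ev: float = 0.0) -> list[float]:
    trail = []
    while budget > 0:
        value, cost = step(len(trail))  # hypothetical estimator for the next inference
        if value - cost < min_ev:       # the halting function: stop feeding the loop
            break
        trail.append(value)
        budget -= cost                  # scarce m.e.a.t spent on the digression
    return trail
```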

  • another issue is that crossing inferences is too slow

  • people cannot read or go through 100+ inferences

  • there’s sometimes embedded experiential knowledge at certain points in the inferences

  • a clear litmus test is to take laymen (non-experts) and bridge multiple persons across the inferences

  • this takes work and is very much an empirical process

  • since it’s a skillset of communication and finding shorthand for each specific person based on their backgrounds and conceptual mapping

  • though it’s certainly possible

  • the professionalized ecosystem versions of these are the VC-startup ecosystem and the PhD-professor ecosystem

  • both are incentivized by their respective status symbols to orient people toward “learning” and crossing inferential bridges (for VCs, power-law MOIC outcomes; for professors, tenure and published papers)

  • these more “concrete tangible examples” of discursive experiments

  • that happen as halting functions

  • with a “market eval” (either VC funding or professor time/attention and then peer review)

  • happen within one’s own mind for trails of thought/inferences as well

  • neuro divergence is one phrasing

  • inferential distance or inferential aloneness is another semantic descriptor

  • consensus formation from inferences beyond the edge of normative society

  • likely we’ll have more of this

  • where does that lead us?

  • thiel has made the point: is freedom possible in a democracy?

  • pointing to -> the majority always being able to coerce the minority

  • also the idea where:

  • 50-60% consensus, you’re in a democracy

  • 60-70% consensus, you’re in Singapore

  • 90-100% consensus, you’re in North Korea


  • consensus formation

  • is a function of aligned tribes

  • without the option to exit

  • tribal jockeying begins

  • usually this boils down to the allocation of scarce resources

  • likely the line between operator and investor will continue to dissolve

  • from a m.e.a.t deployment perspective the tension certainly dissolves

  • allocation and returns occur at all levels of scarcity

  • investments can compound across money.energy.attention.time as well

  • streams are one of the few ways to drop thoughts above a certain abstraction

  • they allow for some RAM decaching

  • some of the higher-abstraction thoughts are close enough to operational knowledge that they can be shared internally to influence product, direction, engineering, or prioritization

  • but not important enough where it’s ok to be shared publicly

  • there are not many purebred operators writing or posting frequently

  • it’s usually much less worth it to write publicly as an operator

  • more useful as an investor

  • since an operator can effectuate most valuable thoughts

  • anything below a certain abstraction is an actionable thought

  • anything above a certain abstraction gets into the territory of the philosophical

  • the edge between philosophical and operational is an interesting juncture

  • it likely is the point where the future is tilted toward various states (on axes of values)

  • hm, it’s not clear at the moment how much just streaming data into an AI can increase its quality

  • the potential of RAG and meaningspace vector search is not quite clear

  • likely it’s difficult to measure this or have a good sense without a lot of experiential feel through many iterated interactions

  • oddly the iteration space is extremely high here

  • there’s likely many teams in AI that are at respective frontiers, feeling out the experience directly
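A minimal sketch of meaningspace vector search over a stream, assuming a hypothetical embed() that maps text to vectors:

```python
# Toy RAG retrieval: rank past stream entries by cosine similarity to a
# query vector, then feed the top entries to a model as context.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec: list[float], entries: list[tuple[list[float], str]], top_k: int = 3):
    # entries: (vector, text) pairs built by embed()-ing each past stream post
    return sorted(entries, key=lambda e: cosine(query_vec, e[0]), reverse=True)[:top_k]
```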

  • currently both the bit rate of communication and the formatting of communication are very poor; it’s likely highly misunderstood how much learning possibilities have shifted over the last year or so

  • this is definitely a lower-overhead way of interacting with the internet

  • usually it takes more time to seamlessly connect inferences

  • though it’s relatively straightforward to do so

  • would like a way to compile the information into an AI that is a secondary means of interacting with content

  • linearly laying out inferences based on the author’s perception of lowest-common-denominator sequencing is archaic

  • many spiritual books employ Q&A format

  • since it allows any person along any point in the inferential chain to start anywhere (“meet the person where they are”)

  • are political problems reducible to technological problems?

  • self governance, politics as the consensus mechanism for power

  • power in governance boils down to the application of incentives: physical/military, financial/taxation/fines/sanctions, and social/economic (“cancelling”)

  • freedom to trade and transact, freedom to exist

  • important for writing to have clean logging systems

  • a blockage downstream propagates upstream

  • any blockage propagates

  • reconciling paradoxes

  • centuries long computes of humanity

  • unresolved spectrums

  • communism and capitalism

regulation by enforcement

is a statistical negative potential incentive

applied to actors

when they are considering what to do

and information seeps out through each “decision” by an agency

or a court
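A minimal sketch of that statistical disincentive, with made-up numbers:

```python
# Regulation by enforcement as expected value: the payoff of an action net
# of the probability-weighted penalty an agency or court might apply.
def expected_value(payoff: float, p_enforcement: float, penalty: float) -> float:
    return payoff - p_enforcement * penalty

# a $10M payoff facing a 30% chance of a $50M penalty nets to -$5M
print(expected_value(10e6, 0.3, 50e6))  # -> -5000000.0
```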

the US applies the rough framework of:

an agency enforces

a court interprets

congress legislates laws

however many laws

always leave room for interpretation

the battleground for the “truth” of what is allowed and not allowed

along with the relevant penalties
lies in courts and litigation

for example

for the crypto vs SEC scenario

it is often portrayed as “bad crypto” vs “bad gensler/sec”

however both parties are simply doing the things that either one must do as

an interpretation of mandate

or

within the interest of a group

discretization occurs when there are many conflicting decision makers within a company, group, or agency

when the in-group makes a decision

the “consensus mechanism”

for weights in determining a choice or action pathway between A, B, C etc

may be vanishingly small
51% for choice/pathway A vs 49% for choice/pathway B

after an in-group decides internally

many times the in-group wants to provide a united front

when presenting the decision to out-groups

so the out-groups never know how hotly or deeply contested a decision was once the in-group decides what to do and the out-groups become aware of the decision

an SEC decision regarding crypto is notably always followed by a dissenting opinion from Hester Peirce

hence what many may interpret as the evil SEC disregarding the rule of law or making dumb enforcement decisions

may just be a form of strategic ambiguity

to cast greyness into a free zone of regulatory non-clarity

to buy time

so that politicians and congressional lawmakers

can figure out how to set up rules and incentives

for an ever expanding and evolving area

that nets out into how top-down paternalistic government ought to be

vs bottom-up laissez-faire

the core crux in question boils down to

do I let the children sort out any problems or conflicts

or do I get involved?

these are not just centuries-long questions but even millennia-long questions

these challenges of parental intervention vs non-intervention

are threaded through parables, mythologies, religious texts

what many fail to notice is that there are questions with no resolved answers

yet circumstances dictate discrete decisions, actions, rules, laws, incentives, enforcements, interpretations and so forth

what many people don’t understand about how

the world plays out

amongst a field of many actors and players

is that the inherent discretization of actions or “moves”

may look confusing or “interest group based”

however when one transcends upwards and views the culmination

of many actions by many actors

the resulting outcome

or actuality produced by various adversarial “neural nets”

is revealed as a form of limited computation

what may seem like discrete actors taking discrete actions

that conflict with one another

is in actuality

actors simply computing or reconciling

along a spectrum or axes of unclear outcomes/decisions

social reality is a network of incentives

applied at the social layer, panning out in reputational and economic incentives

cancelling someone is a derivative of attacking an economic abstraction of someone in m.e.a.t terms

m.e.a.t is shorthand for money.energy.attention.time

m.e.a.t warfare like sanctions, taxes, and tariffs

are a means to oscillate between discrete violence (killing and bodily harm) and continuous violence (damaging reputation, economics, or current and future m.e.a.t)

modulating between the two creates an evolving battleground for social reality

to negotiate norms and “rules”

thinking is contextual

a downstream blockage propagates upstream

an inferential desert created a mimetic death valley

inferential aloneness can be fostered

inferential pathways can be staggered

inferences can be bridged

isolated reality reified into social reality

communication evolves based on context

context evolves based on incentives

incentives evolve based on rules

rules evolve based on enforcement

enforcement evolves based on status and position

status and position evolve based on primitives

some primitives are monopolization of military, canonical property rights, and creation of monies

encrypted coordination occurs in adversarial contexts where the nature of game theory evolves

clear communication vs encrypted communication

coordination via contextual maps

shibboleths reveal a pathway

cryptographic statements with a partial multi-sig decryption based on inferential pieces to the puzzle

boom: mystical statements with no clear meaning, but which may be referred back to as breadcrumb drops for retrospective analysis
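A toy analogy for those puzzle-piece statements, assuming n-of-n additive secret splitting (not a production scheme): meaning only recombines when every inferential share is present.

```python
# Additive secret splitting over a prime field: each "inferential piece"
# is one share; the secret recombines only from the full set of shares.
import secrets

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret: int, n: int) -> list[int]:
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def combine(shares: list[int]) -> int:
    return sum(shares) % P

pieces = split(42, 5)
print(combine(pieces) == 42)      # True: all pieces -> the meaning decrypts
print(combine(pieces[:4]) == 42)  # almost surely False: a missing piece -> noise
```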

it’s not clear what the use case of this is

partly it’s a way to showcase thinking publicly

this may be more useful when younger

potentially less useful when older

it’s not clear how big of a difference this is from twitter