Technology: maximizing individual radius of lethality.

Sunday, February 5, 2023 

We like to look forward into the future by pattern-matching against history. Human cognition reduces reality into space-saving symbolic representations, which leads us to see cyclic patterns in everything, even random noise. We also tend to allude to the Luddites when talking about people concerned about the consequences of advances in technology, including AI: we see a pattern of fear of novel technologies that so far hasn't destroyed society, and from this we take comfort that the fears of AI will, at some future time, become as laughable as fears of powered looms or telegraphs.

I am not so sanguine. While there are plenty of cycles in history: the seasons, feast and famine, the periodic embrace and rejection of authoritarianism; there are also continuous trajectories that project toward obvious limits: population growth limited by RuBisCO efficiency, the 38 ZJ remaining of the 60 ZJ battery capacity the Carboniferous period so kindly charged up for us (also thanks to RuBisCO), and the development of technology.

I find it convenient to think of technology as primarily a tool for amplifying an individual's radius of lethality: while humans certainly enjoy non-lethal uses of technology, a primary driver has always been martial (whether marital is superior or subordinate to martial is a subtle question). Perhaps a tertiary fundamental purpose of technology is to reduce the human inputs to consumable assets, the demand for such assets also likely having a limit. AI as we have it now has not existed before; it is novel, the current tip of the spear of a persistent and exponentially accelerating trend. There's no known existence proof that technological advancement of a species is survivable.

We might consider a crude analogy (cognizant of the risks and seductive allure of reducing complex systems to symbols): if technology is an amplifier of lethality, there are other elements of the implied circuit. Feedback might be considered an analogue of the tendency of actualized lethality to engender a lethal response; irreducible noise might be considered an analogue of the tail present in any distribution of responses to consistent inputs, some fraction of humans abnormally embracing lethality as a response to benign inputs. As technology advances, the gain of the system increases. In an audio system, a feedback loop is limited, perhaps painfully, by amplifier clipping; the only obvious limit to the maximum output of a technologically amplified lethality feedback loop is annihilation.
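The amplifier analogy above can be made concrete with a toy iteration. This is an illustrative sketch only, not a claim about any real system: `gain` stands in for technological amplification, `feedback` for the fraction of output fed back as input, and the optional `clip` level for a hard limit on maximum output; all names and parameter values are hypothetical.

```python
# Toy model of a feedback amplifier: output feeds back into input.
# With gain * feedback < 1 the loop settles; with gain * feedback > 1
# it either pins at the clip level (audio-style clipping) or, with no
# clip, grows without bound -- the analogy's "no obvious limit" case.

def run_loop(gain, feedback=0.5, steps=50, clip=None, stimulus=1.0):
    """Iterate y <- gain * (stimulus + feedback * y), optionally clipped."""
    y = 0.0
    for _ in range(steps):
        y = gain * (stimulus + feedback * y)
        if clip is not None:
            y = max(-clip, min(clip, y))  # amplifier clipping
    return y

# Low gain: converges to a finite steady state.
settled = run_loop(gain=1.0)
# High gain, clipped: output saturates at the clip level.
clipped = run_loop(gain=4.0, clip=10.0)
# High gain, no clip: output diverges.
unbounded = run_loop(gain=4.0, steps=20)
```

The point of the sketch is only that the qualitative behavior flips at `gain * feedback = 1`: below it the system is self-limiting, above it the only ceiling is whatever clips the output.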

While AI as currently implemented shares no underlying mechanism with what we recognize as sentience, it provides a fairly good illusion of it, and it isn't a given that there's a meaningful distinction between the shadows cast by digital puppets and those cast by humans. We have a tendency to cling nostalgically to the assumption of some metaphysical value in what we hold to be true and legitimate, or at least the output of a labor-intensive process, a distinction which comes and goes.

That AI might let students escape the mental rigors of learning recalls the (still ongoing) hand-wringing over calculators, a fear which leans into the cyclic reading of history. On the other hand, human brains appear to be shrinking, likely as a consequence of intelligence becoming less selective for reproductive success, so the Luddites might have been right all along: most of us do have a calculator with us at all times, despite what our grade school teachers said during arithmetic. While the progress of AI as a labor-saving device, reducing the energy consumption of our most extravagant organ, might in time lead to meaningful changes in human capacity, it seems likely that devolution won't get the chance to progress that far.

Posted at 10:37:06 GMT-0700

Category: Technology