A.I.
More than "Artificial Intelligence"
Work in progress, this is too disorganized
Reductionism and fear are not useful tools for understanding anything, much less predicting the future.
People like to extrapolate straight lines, and lately do so with log-scale graphs. They are inspired by Moore's Law, which has indeed worked that way because there is plenty of room at the bottom for making transistors smaller, and smaller transistors can be faster transistors.
However, programmers do not scale similarly. Their productivity grows slowly, and may not be growing at all once the time cost of hubristic failed products is added to the successes. If some of the so-called successes (Windows 10, Gnome 3) are measured by customer rather than programmer value, they may actually belong among the failures. We are not producing lines of usable code a billion times cheaper than we were in 1980, as we are with transistors, and we have not learned to quantify customer value in ways that deterministically drive software design.
So what? Lines of code and transistors are not intrinsically valuable; value is measured in human terms. Money buys human attention, whether that is personal attention from a doctor or a restaurant waiter, or the engineering and marketing attention that goes into producing millions of identical products. If more automation is used to make a product, then the price may drop as many similarly equipped producers compete for customer dollars. Alternatively, the same price may buy better engineering and higher precision production machinery. Customers (who earn money for their attention, more if their attention is more valuable) buy more product value for the same dollars, and ultimately end up consuming the same amount of other humans' attention. Wealthier customers often buy customized products, higher value widgets that more precisely match their individual needs, freeing more time and attention for what those customers produce rather than for adjusting to what they buy.
The electronics revolution began with the telegraph: a way to deliver messages by electrical rather than mechanical (horse and messenger) power. With the invention of the mechanical calculator, machine power began to manipulate symbols, not merely transmit them. With automation-assisted manufacturing, symbols began to manipulate machines, closing the circle. This suggests a process that can grow exponentially, until you realize that humans still define the value of the inputs and outputs, and this human definition process, both by producers and consumers, is the reason for all the bother, not the machines themselves.
Imagine automation as an "innovation amplifier", allowing the same amount of design intention to produce a wider range of customer-centric products, perhaps single copies of products that are exactly matched to individual customer needs. That frees up more attention to discover even better adaptations. Yes, an automated assembly line can churn out a million identical objects faster and cheaper and more accurately than human workers can, but can it autonomously innovate changes that anticipate the needs of specific individuals? Not just colors and add-ons, but de novo collaborations with individual customers? Before the mass production revolution, that is what artisans actually did.
Rather than complete the process of reducing humans to cogs in a Soviet-style mass-production mass-consumption everyone-the-same machine, we can use machines to make everyone unique, everyone an extension of the possibility space, everyone a prodigious generator of surprise (what Shannon channel capacity is actually a measure of). We can use machines to translate between all those heterogeneous surprise generators, maximizing the flow of value between them. We can even translate between human and animal, human and plant, human and the microscopic and the macroscopic. Humans may be nodes in a gigantic graph, but we can connect that graph to many other participants.
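To make "surprise" concrete: Shannon's measure assigns more bits of information to less probable messages, so a hard-to-predict source carries more information per symbol than a predictable one. A minimal sketch, with made-up symbol probabilities purely for illustration:

    import math

    def surprise_bits(p):
        # Shannon self-information: an event of probability p carries -log2(p) bits
        return -math.log2(p)

    def entropy_bits(probs):
        # Average surprise per symbol from a source with these probabilities
        return sum(p * surprise_bits(p) for p in probs if p > 0)

    predictable = [0.97, 0.01, 0.01, 0.01]   # almost always says the same thing
    surprising  = [0.25, 0.25, 0.25, 0.25]   # anything is equally likely

    print(entropy_bits(predictable))   # about 0.24 bits per symbol
    print(entropy_bits(surprising))    # 2.0 bits per symbol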
Metcalfe's Law
In 2016, the end product of most transistors, software, and automation is communication, some mix of audio and visual telecommunication that connects individuals. Some of the communication is "one to many" - forms of entertainment, for example - but the most valuable is one-to-one. Even with entertainment, the "many" is shrinking - cheap electronics and unlimited channels allow many of us to bypass aggregation and interact with smaller audiences. This is putting information monopolies out of business, not amplifying their mass production power and eliminating competition.
Metcalf's "law" describes the value of a network as proportional to the square of the number of nodes. Today, this is mostly unrealized value; in the abstract, you can dial 16 digits or less and talk to anyone else in the world, but most of them will talk back in Chinese or Spanish or Arabic. Even if you speak the same language, no valuable communication will result - you will waste at least two people's time and attention. But what if your phone could translate your calls, and help you have the most valuable conversation possible with anyone in the world? Perfect strangers who, by uncanny chance, have something to tell you Right Now that can significantly improve your life, as you can improve theirs?
Extrapolate further - what if your machines enabled a multinode mesh of simultaneous conversation, many participants perceiving and behaving as if in familiar dyadic conversations, but actually distributing speech like packets to many different listeners? Now we have an exponentially more valuable network than Metcalf's dyadic pairs.
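A rough way to quantify the gap, assuming (generously) that every possible pairing or grouping carries some value: n people support n(n-1)/2 one-to-one conversations, the pairs behind Metcalfe's square-law scaling, while the number of possible multi-party groupings grows as 2^n. A toy comparison:

    def dyads(n):
        # One-to-one conversations: the pairs behind Metcalfe's n-squared scaling
        return n * (n - 1) // 2

    def groupings(n):
        # Conversational groups of two or more people: 2^n subsets, minus the
        # empty set and the n singletons -- exponential rather than quadratic
        return 2**n - n - 1

    for n in (10, 20, 40):
        print(n, dyads(n), groupings(n))
    # 10    45    1013
    # 20   190    1048555
    # 40   780    1099511627735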
More Later.
What is A? What is I?
"Artificial Intelligence" is a combination of two groups of concepts. Artifacts (including objects, images, or symbol streams made by artifice or artistry) and Intellect: adaptability, survivability, knowledge, skill, understanding, communication, location, identification, description, discovery, teaching, learning, accomplishment, appreciation, categorization, generalization, collaboration, socialization, creativity, imagination, prediction, adaption, error detection, threat avoidance, ... The ellipsis represents a vast range of other behaviors, some discovered only recently, many waiting to be discovered, many emerging de novo in response to new situations. Human brains (the reigning exemplar of intelligence) are plastic, self-constructing, and self-programming. It is theoretically possible that logic and memory can perform similar functions, but Ricardo's law of comparative advantage suggests that even if that mechanisms can outperform humans on every axis of this ridiculously multidimensional space, the relative advantage on each axis will scale differently, so specialization and trade is rewarded.
An example of Ricardo's Law in historical context: Assume the only product we care about is pairs of shoes, the more the better. Jane can produce 100 left shoes per day, or 100 right shoes per day, or 50 pairs. One-armed Jack can produce only 30 left shoes or 15 right shoes per day, or 10 pairs (1/3 of his day making left shoes, 2/3 making right shoes). If each works alone, the total production is 60 pairs: Jane gets 50 pairs and Jack gets 10 pairs. But what if Jack makes only left shoes, 30 of them, and gives 18 left shoes to Jane in return for 12 right shoes? Jane produces 35 left shoes and 65 right shoes, and ends up with 53 pairs. Jack ends up with 12 pairs. Total production: 65 pairs. Jane gains three more pairs from the trade (a 6% increase), Jack gains two more pairs (a 20% increase); both benefit. Who benefits most? Jane gets more extra pairs from the trade, but a smaller relative benefit, so Jack may be more motivated to accept this "unfair" trade than Jane is. If Jack insists on 15 right shoes for 15 left shoes, Jane gets 50 pairs either way, so why bother with the trade?
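The shoe arithmetic is easy to check mechanically. A small sketch in Python; the production rates, the 18-for-12 trade, and Jane's 35/65 split come straight from the example above, and the helper names are just for illustration:

    def pairs_alone(left_rate, right_rate):
        # Split the day so left output equals right output: t*left = (1-t)*right
        t = right_rate / (left_rate + right_rate)
        return t * left_rate

    print(pairs_alone(100, 100))   # Jane alone: 50.0 pairs
    print(pairs_alone(30, 15))     # Jack alone: 10.0 pairs

    # With trade: Jack makes only left shoes (30), swaps 18 left for 12 right;
    # Jane makes 35 left and 65 right, gives up 12 right, receives 18 left.
    jack_left, jack_right = 30 - 18, 12
    jane_left, jane_right = 35 + 18, 65 - 12
    print(min(jack_left, jack_right))   # Jack: 12 pairs, up from 10
    print(min(jane_left, jane_right))   # Jane: 53 pairs, up from 50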
In a multidimensional "intelligence" market, a machine can manufacture many things, but it will probably be "least good" at rewiring itself for fast built-in response to stimuli, and "most good" at high speed long distance communication. Humans move data fastest with semaphores, but many humans and much energy are required, while an optical fiber (intelligent or not) can move data at about 60% of the speed of light. OTOH, a mask-and-chip-fab cycle may take a week in an efficiently scheduled factory, while brains can grow new axons in hours. Even if the fab cycle sped up (this is costly, and many aspects do not parallelize easily), Ricardo's Law suggests that the digisphere should weight itself towards communication, and brains should weight themselves towards rewiring, and that both sides of the exchange will benefit.
But where do we get the notion of "sides" from, anyway? We are already cyborgs, combinations of machine and meat; my dumb little computer is doing tasks any self-respecting AI would consider trivial, but it does them much faster than I can: turning my key presses into pixels on a screen that resembles typewriter character output, storing the characters on disk, automatically sending them for you to read when you ask your computer for them. And that is using a 7-year-old laptop computer and screen. Advances in technology will permit me to send control signals out of my brain much more rapidly and accurately than my fingers can, perhaps creating pictures and adding citations and counting shoes in cheesy examples much more rapidly. The productivity of "me" plus "my intimate electronics" will continue to improve as rapidly as external AI can - perhaps faster, because I am starting with a first mover advantage, and I am the one paying for the machines, so I am choosing what they do. Or more accurately, WE choose what WE do, as an integrated entity.