Monday, April 13, 2009

on emergence and the wisdom of crowds

what is emergence? the billiard table: “Imagine a billiard table populated by semi-intelligent, motorized billiard balls that have been programmed to explore the space of the table and alter their movement patterns based on specific interactions with other balls. For the most part, the table is in permanent motion, with balls colliding constantly, switching directions and speed every second. Because they are motorized, they never slow down unless their rules instruct them to, and their programming enables them to take unexpected turns when they encounter other balls. Such a system would define the most elemental form of complex behavior: a system with multiple agents dynamically interacting in multiple ways, following local rules and oblivious to any higher-level instructions. But it wouldn’t truly be considered emergent until those local interactions resulted in some kind of discernible macrobehavior. Say the local rules of behavior followed by the balls ended up dividing the table into two clusters of even-numbered and odd-numbered balls. That would mark the beginnings of emergence, a higher-level pattern arising out of parallel complex interactions between local agents. The balls aren’t programmed explicitly to cluster in two groups; they’ve been programmed to follow much more random rules: swerve left when they collide with a solid-coloured ball; accelerate after contact with the three ball; stop dead in their tracks when they hit the eight ball; and so on. Yet out of those low-level routines, a coherent shape emerges. Does that make our mechanized billiard table adaptive? Not really, because a table divided between two clusters of balls is not terribly useful, either to the billiard balls themselves or to anyone else in the pool hall.
But like the proverbial Hamlet-writing monkeys, if we had an infinite number of tables in our pool hall, each following a different set of rules, one of those tables might randomly hit upon a rule set that would arrange all the balls in a perfect triangle, leaving the cue ball across the table ready for the break. That would be adaptive behavior in the larger ecosystem of the pool hall, assuming that it was in the best interest of our billiards system to attract players. The system would use local rules between interacting agents to create higher-level behavior well suited to its environment. Emergent complexity without adaptation is like the intricate crystals formed by a snowflake; itʼs a beautiful pattern, but it has no function....emergent behavior...growing smarter over time, and of responding to the specific and changing needs of their environment. In that sense, most of the systems weʼll look at are more dynamic than our adaptive billiards table: they rarely settle in on a single frozen shape; they form patterns in time as well as space. A better example might be a table that self-organizes into a billiards-based timing device: with the cue ball bouncing off the eight ball sixty times a minute, and the remaining balls shifting from one side of the table to another every hour on the hour. That might sound like an unlikely system to emerge out of local interactions between individual balls, but your body contains numerous organic clocks built out of simple cells that function in remarkably similar ways. A near-infinite number of cellular or billiard-ball configurations will not produce a working clock, and only a tiny number will. So the question becomes, how do you push your emergent system toward clocklike behavior, if thatʼs your goal? How do you make a self-organizing system more adaptive?
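Johnson's even/odd clustering can be sketched in a few lines of code. This is a toy model, not anything from the book: the fifteen balls, the parity rule, and the step count are all illustrative. Each ball follows one purely local rule, comparing its parity with its immediate neighbour's, yet the table as a whole settles into two clusters:

```python
import random

def step(balls):
    """One local interaction: a random adjacent pair applies its rule.

    The rule is purely local -- each ball sees only its immediate
    neighbour's parity -- yet repeated application sorts the table
    into an even cluster and an odd cluster."""
    i = random.randrange(len(balls) - 1)
    a, b = balls[i], balls[i + 1]
    if a % 2 == 1 and b % 2 == 0:        # odd sitting before even: swap
        balls[i], balls[i + 1] = b, a

def clustered(balls):
    """True when every even ball precedes every odd ball."""
    parities = [b % 2 for b in balls]
    return parities == sorted(parities)

random.seed(1)
table = list(range(1, 16))               # balls 1..15, shuffled
random.shuffle(table)
for _ in range(10_000):                  # many cheap local interactions
    step(table)
print(clustered(table))                  # macro-pattern: two parity clusters
```

No ball "knows" about the clusters; the higher-level pattern is only visible when you inspect the whole table, which is exactly the micromotives/macrobehavior gap the quote describes.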
That question has become particularly crucial, because the history of emergence has entered a new phase in the past few years, one that should prove to be more revolutionary than the two phases before it. In the first phase, inquiring minds struggled to understand the forces of self-organization without realizing what they were up against. In the second, certain sectors of the scientific community began to see self-organization as a problem that transcended local disciplines and set out to solve that problem, partially by comparing behavior in one area to behavior in another... But in the third phase - the one that began sometime in the past decade... - we stopped analyzing emergence and started creating it. We began building self-organizing systems into our software applications, our video games, our art, our music. We built emergent systems to recommend new books, recognize our voices, or find mates. For as long as complex organisms have been alive, they have lived under the laws of self-organization, but in recent years our day-to-day life has become overrun with artificial emergence: systems built with a conscious understanding of what emergence is, systems designed to exploit those laws the same way our nuclear reactors exploit the laws of atomic physics. Up to now, the philosophers of emergence have struggled to interpret the world. But they are now starting to change it.” (Johnson, Steven. 2002. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. 1st Touchstone ed. New York: Touchstone. pp. 18-20)

how can an emergent system be pushed towards adaptive emergence? “More is different. This slogan of complexity theory actually has two meanings that are relevant to our ant colonies. First, the statistical nature of ant interaction demands that there be a critical mass of ants for the colony to make intelligent assessments of its global state.
Ten ants roaming across the desert floor will not be able to accurately judge the overall need for foragers or nest-builders, but two thousand will do the job admirably. “More is different” also applies to the distinction between micromotives and macrobehavior: individual ants donʼt “know” that theyʼre prioritizing pathways between different food sources when they lay down a pheromone gradient near a pile of nutritious seeds. In fact, if we only studied individual ants in isolation, weʼd have no way of knowing that those chemical secretions were part of an overall effort to create a mass distribution line, carrying comparatively huge quantities of food back to the nest. Itʼs only by observing the entire system at work that the global behavior becomes apparent. Ignorance is useful. The simplicity of the ant language - and the relative stupidity of the individual ants - is, as the computer programmers say, a feature, not a bug. Emergent systems can grow unwieldy when their component parts become excessively complicated. Better to build a densely interconnected system with simple elements, and let more sophisticated behavior trickle up. (Thatʼs the reason why computer chips traffic in the streamlined language of zeroes and ones.) Having individual agents capable of directly assessing the overall state of the system can be a real liability in swarm logic, for the same reason you donʼt want one of the neurons in your brain to suddenly become sentient. Encourage random encounters. Decentralized systems such as ant colonies rely heavily on the random interactions of ants exploring a given space without any predefined orders. Their encounters with other ants are individually arbitrary, but because there are so many individuals in the system, those encounters eventually allow the individuals to gauge and alter the macrostate of the system itself.
Without those haphazard encounters, the colony wouldnʼt be capable of stumbling across new food sources or of adapting to new environmental conditions. Look for patterns in the signs. While the ants donʼt need an extensive vocabulary and are incapable of syntactical formulations, they do rely heavily on patterns in the semiochemicals they detect. A gradient in a pheromone trail leads them towards a food source, while encountering a high ratio of nest-builders to foragers encourages them to switch tasks. This knack for pattern detection allows meta-information to circulate through the colony mind: signs about signs. Smelling the pheromones of fifty foragers in the space of an hour imparts information about the global state of the colony. Pay attention to your neighbors. This may well be the most important lesson that the ants have to give us, and the one with the most far-reaching consequences. You can restate it as “Local information can lead to global wisdom.” The primary mechanism of swarm logic is the interaction between neighboring ants on the field: ants stumbling across each other, or each otherʼs pheromone trails, while patrolling the area around the nest. Adding ants to the overall system will generate more interactions between neighbors and will consequently enable the colony itself to solve problems and regulate itself more effectively. Without neighboring ants stumbling across one another, colonies would be just a senseless assemblage of individual organisms - a swarm without logic.” (Johnson, Steven. 2002. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. 1st Touchstone ed. New York: Touchstone. p. 78)
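The swarm-logic rules above (random encounters, reading the ratio of tasks in recent meetings, acting only on neighbours) can be sketched as a toy simulation. Every number here is an illustrative assumption, not from the book: the 50/50 target ratio, the memory of ten encounters, the colony size. The point is only that a global task ratio gets regulated even though no ant ever assesses the whole colony:

```python
import random

TARGET = 0.5      # desired share of foragers (assumed for illustration)
SAMPLE = 10       # how many recent encounters each ant remembers

class Ant:
    def __init__(self, task):
        self.task = task          # 'forager' or 'builder'
        self.memory = []          # tasks of recently met neighbours

    def meet(self, other):
        """Record the neighbour's task; switch if foragers seem
        over- or under-represented in recent random encounters."""
        self.memory.append(other.task)
        if len(self.memory) > SAMPLE:
            self.memory.pop(0)
        share = self.memory.count('forager') / len(self.memory)
        if share > TARGET:        # "too many foragers around": build
            self.task = 'builder'
        elif share < TARGET:      # "too few foragers around": forage
            self.task = 'forager'

random.seed(0)
# Start badly out of balance: 180 foragers, 20 builders.
colony = [Ant('forager') for _ in range(180)] + [Ant('builder') for _ in range(20)]
for _ in range(20_000):           # haphazard pairwise encounters
    a, b = random.sample(colony, 2)
    a.meet(b)
    b.meet(a)
foragers = sum(ant.task == 'forager' for ant in colony)
print(foragers / len(colony))     # hovers near TARGET, with no central control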

"Not all crowds (groups) are wise. Consider, for example, mobs or crazed investors in a stock market bubble....According to Surowiecki, these key criteria separate wise crowds from irrational ones:

Diversity of opinion
Each person should have private information even if it's just an eccentric interpretation of the known facts.
Independence
People's opinions aren't determined by the opinions of those around them.
Decentralization
People are able to specialize and draw on local knowledge.
Aggregation
Some mechanism exists for turning private judgments into a collective decision."
(wikipedia, april 13, 2009, 9:24pm)
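The four criteria can be illustrated with a toy version of Galton's ox-weighing story (the noise level and crowd size below are made up for the sketch): many diverse, independent guesses plus a simple aggregation mechanism, the mean, produce an estimate better than almost every individual judge:

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198                 # the ox's weight; judges don't know it

# Diversity + independence: each judge errs in their own private way,
# drawing their own noise rather than copying a neighbour's guess.
judges = [TRUE_WEIGHT + random.gauss(0, 120) for _ in range(800)]

crowd_estimate = sum(judges) / len(judges)       # the aggregation mechanism
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
individual_errors = [abs(j - TRUE_WEIGHT) for j in judges]

beaten = sum(e > crowd_error for e in individual_errors)
print(f"crowd off by {crowd_error:.1f}; beats {beaten} of {len(judges)} judges")
```

The averaging only works because the errors are independent and point in different directions; make the judges imitate each other (correlated noise) and the cancellation, and the crowd's edge, disappears.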

how can it go wrong? "Surowiecki studies situations (such as rational bubbles) in which the crowd produces very bad judgment, and argues that in these types of situations their cognition or cooperation failed because (in one way or another) the members of the crowd were too conscious of the opinions of others and began to emulate each other and conform rather than think differently. Although he gives experimental details of crowds collectively swayed by a persuasive speaker, he says that the main reason that groups of people intellectually conform is that the system for making decisions has a systematic flaw. Surowiecki asserts that what happens when the decision-making environment is not set up to accept the crowd is that the benefits of individual judgments and private information are lost and that the crowd can only do as well as its smartest member, rather than perform better (as he shows is otherwise possible). Detailed case histories of such failures include:
Too homogeneous
Surowiecki stresses the need for diversity within a crowd to ensure enough variance in approach, thought process, and private information.
Too centralized
The Columbia shuttle disaster, which he blames on a hierarchical NASA management bureaucracy that was totally closed to the wisdom of low-level engineers.
Too divided
The US Intelligence community, the 9/11 Commission Report claims, failed to prevent the 11 September 2001 attacks partly because information held by one subdivision was not accessible by another. Surowiecki's argument is that crowds (of intelligence analysts in this case) work best when they choose for themselves what to work on and what information they need. (He cites the SARS-virus isolation as an example in which the free flow of data enabled laboratories around the world to coordinate research without a central point of control.)
The Office of the Director of National Intelligence and the CIA have created a Wikipedia-style information-sharing network called Intellipedia to help information flow freely and prevent such failures again.
Too imitative
Where choices are visible and made in sequence, an "information cascade" can form in which only the first few decision makers gain anything by contemplating the choices available: once past decisions have become sufficiently informative, it pays for later decision makers to simply copy those around them. This can lead to fragile social outcomes.
Too emotional
Emotional factors, such as a feeling of belonging, can lead to peer pressure, herd instinct, and in extreme cases collective hysteria." (wikipedia, april 13, 2009, 9:24pm)
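The "too imitative" failure has a standard toy model: a sequential-choice sketch in the spirit of the classic information-cascade setup (the 70% signal quality and run lengths here are my own illustrative choices). Each agent gets a private signal that is usually right, but once the public record of earlier choices outweighs one private signal, copying becomes the rational move, and a sizeable minority of runs lock into the wrong choice:

```python
import random

def run_sequence(n, p=0.7, seed=None):
    """Sequential binary choice with public actions and private signals.

    The true state is 'good' (+1). Each agent's private signal is right
    with probability p. Agents act in order; once earlier public actions
    outweigh one private signal (a lead of 2), imitation is rational and
    a cascade locks in -- later agents' private information is wasted."""
    rng = random.Random(seed)
    actions = []
    for _ in range(n):
        signal = 1 if rng.random() < p else -1   # +1 means a correct signal
        lead = sum(actions)                      # net public evidence so far
        if lead >= 2:
            actions.append(1)                    # up-cascade: copy the herd
        elif lead <= -2:
            actions.append(-1)                   # down-cascade: copy the herd
        else:
            actions.append(signal)               # still worth using own info
    return actions

# Fragility: a couple of early wrong signals can trap everyone after them.
wrong_cascades = sum(run_sequence(50, seed=s)[-1] == -1 for s in range(1000))
print(wrong_cascades / 1000)   # a noticeable minority of runs end up wrong
```

Raising the signal quality p shrinks, but never eliminates, the chance of a wrong cascade; that is what makes visible, sequential choices fragile compared with simultaneous private judgments.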
what can/can't crowds do? "Crowds are best when there's a right answer to a problem or a question. (I call these "cognition" problems.) If you have, for instance, a factual question, the best way to get a consistently good answer is to ask a group. They're also surprisingly good, though, at solving other kinds of problems. For instance, in smart crowds, people cooperate and work together even when it's more rational for them to let others do the work. And in smart crowds, people are also able to coordinate their behavior—for instance, buyers and sellers are able to find each other and trade at a reasonable price—without anyone being in charge. Groups aren't good at what you might call problems of skill—for instance, don't ask a group to perform surgery or fly a plane." (the wisdom of crowds, april 13, 9:52 pm)

what i think: can architecture help the group of building users follow these rules?
more is different / a diversity of opinion
ignorance is useful / independence
look for patterns in the signs
pay attention to your neighbours / aggregation, decentralization
NOT too homogeneous
NOT too centralized
NOT too divided
NOT too imitative
NOT too emotional

and why should it?
to solve cognitive problems
to coordinate behavior
NOT to solve a problem of skill

is this my thesis? please say it is. my only wish is for a thesis...a thesis...a thesis...
