Hand-made Minds in the "Digital" Age
Frank R. Wilson, M.D., Clinical Professor of Neurology, Stanford University

Conference address, June 21, 2002

Today's educational world has been thrown into chaos by a worldwide and exceedingly sudden loss of the fundamental consensus that usually protects established institutions from all but the worst societal storms and upheavals. The economic, technological, political, and social foundations that support the teaching of young children today have become so entangled, contentious, and volatile that I believe no one can possibly pretend to know in any straightforward way what is best for children at this moment in time, or how best to teach them. Some will object to this claim - after all, education has been in a state of chaos since the time of Plato. But I would argue that the cultural underpinnings of education have never been so unstable as they are now. As a consequence - and this is what is really most worrisome - the whole enterprise of education seems to have fallen into a state of agitated confusion. And it is not just teachers and parents who seem to have lost their bearings. Even the leaders of educational institutions have begun to question basic assumptions and to doubt their own ability to plan and to act. It is possible that the elegant French term "anomie" best describes this new psychological atmosphere.

However bad all this sounds, I doubt that the schools will be abandoned or that the teaching profession will become obsolete. Yes, there is a crisis, but crisis is good for us. After all, human intelligence evolved as a response not only to danger, but to the absolute certainty of uncertainty. Those who are vigilant and calm will find a way to thrive in the new environment.

When children draw by hand: the beginning of making meaning.

From my perspective, the single most important actor in this crisis is the computer. Please note that I refer to the computer not as a culprit or the enemy but as an actor. The distinction is important because it is absolutely critical that we understand that the computer is a force whose real nature and potential (both good and bad) we simply do not understand very well yet. We find ourselves in a drama - or perhaps a comedy - in which a rich and overbearing cousin (about whom we know very little) suddenly arrives uninvited. Like Sheridan Whiteside, the infamous "Man Who Came to Dinner" in the classic American comedy by George S. Kaufman and Moss Hart, Mr. Computer is a loud, disruptive, and costly problem from the moment he walks through the door.

What do we do about him? Obviously we cannot ignore him. Do we throw ourselves out the back window to get away? Should we take him hunting or riding and stage a fatal accident and then send him back home in a wooden box? Or do we stand our ground and use our wits to turn the situation around? Before taking action, I think we have to decide what this play is about. Are we in a play about an alien intruder (the computer), or is it really a play about ourselves? 

Two American educators have written books that have helped me to remain calm as I watch education's great battle with the computer. The first book was by Barbara Maria Stafford, professor of art history at the University of Chicago. It is called "Artful Science: Enlightenment Entertainment and the Eclipse of Visual Education". The second book was written by Shoshana Zuboff, professor of business administration at Harvard. It is called "In the Age of the Smart Machine: The Future of Work and Power".

Stafford's is a long and rather difficult book, but well worth the effort for anyone who is trying to make sense of the impact of computers on educational practice. Zuboff’s book is far easier to read but not at all concerned about education in any formal sense: it is about the effect of computers on the organization of labor and management. Ultimately, because computers have transformed the nature of work (and the qualifications demanded of people working at every level of large organizations) Zuboff's book actually is about education.

What these two books tell us is this: the rather strange play that we find ourselves in is a dramatic fantasy about the end of the European age of enlightenment. Here I must borrow from the philosopher Michel Foucault, who thought of enlightenment as a particular kind of personal self-awareness that became a defining theme of European thought over the past 500 years. Foucault accepted Immanuel Kant's prediction that the age of enlightenment would be "the moment when humanity is going to put its own reason to use, without subjecting itself to any authority." For the poet Charles Baudelaire, the enlightened modern man "is not the man who goes off to discover himself, his secrets and his hidden truth, he is the man who tries to invent himself. This modernity does not 'liberate man in his own being'; it compels him to face the task of producing himself."

This is the kernel of the idea I want to explore now: that for at least 500 years it has increasingly been the highest intention of education not merely to equip but to compel the individual to produce himself or herself. The consequence of this process is that the individual becomes the authority behind his or her own actions. I hope to convince you that the hand has had, and almost certainly will continue to have, a central role in the implementation of this grand educational project of western civilization.

Just a little over 500 years ago, Europeans were in the early stages of a revolution that was every bit as unexpected and as destabilizing as the present computer and internet revolutions have proven to be. The printing press had been invented with the hope that books would become easier to publish and obtain. But when Martin Luther's list of questions about papal indulgences was copied, printed up, and sent around, the world suddenly became a place where a cat could not only look at a king, but could take his head off with a rhetorical question and a distribution list. The Protestant Reformation was, among other things, the grand opening of the information age, and Martin Luther was the world's first media celebrity.

Another event from the same period about which everyone knows a great deal also bears a somewhat odd resemblance to an event of our own time. After barely overcoming the intense skepticism of the Spanish court's mathematicians and astronomers, the 40-year-old son of a Genoese weaver persuaded Queen Isabella of Spain to make a highly risky investment in a maritime expedition into completely uncharted waters. Few people believed the trip could be safely made, and fewer still believed there was any chance of Columbus and his men returning safely, with or without riches from the East.

Finally, in this small trio of transforming events, a young Belgian named Andreas Vesalius (who had studied medicine in Paris and Padua) ended a nearly 2000-year-old taboo on human dissection when (in 1543) he published the world's first comprehensive textbook of human anatomy. His book is widely credited as having started the conversion of medicine from a loose collection of folk remedies and magic to a profession founded on scientific observation and analysis. Vesalius served the Court of Charles V, and since his time western civilization has accumulated a full five hundred years of turbulent, competitive, collaborative, sometimes euphoric and sometimes unspeakably tragic experience.

Gradually we have begun to understand that what sets humans apart from all other creatures on earth is our ability to define ourselves uniquely in our own time, where and with whom we live, through the exercise of our own bodies, minds, and spirits and through each other. Our vitality and our curiosity are the gift of our genes; our autonomy and, in Kant's terms, our authority, are the fruit of our labor.

So here we are in the year 2002. We are still Homo sapiens, and we have for all practical purposes the same genes our earliest human ancestors had 200,000 years ago. And we are in a state of confused exhilaration and shock because suddenly life is being transformed too rapidly for us to adapt, so fast that we have no idea what to tell our children about how to prepare themselves.

Five hundred years ago European civilization was thrown into chaos by the printing press, a sailor who ventured across an uncharted ocean, and a physician who needed to see more so that he could do more. In our time, we are overrun by computers and beholden to the Internet, we have been to the moon and back, and we have machines that can look into the body of a living person. Doesn't it sound oddly familiar? Haven't we already, to use a popular phrase, been there and done that? What are we worried about?

I think we are worried because we have allowed ourselves to believe that we humans owe our success to the human brain, and we have been told that the human brain has suddenly become obsolete - an artifact of pre-digital human civilization. That is why this play we are watching about the computer doesn't feel much like a comedy. We may not be watching a comedy, but we are not watching a horror show either. We are just watching ourselves. Perhaps we should look not so closely at ourselves but at the landscape that stretches out behind the face we see every time we look into the mirror. It is the same landscape that stands behind every student who enters a classroom. And every student who enters a classroom comes with quite a pedigree: he or she is no less than the present incarnation of millions of years of unbroken, field-tested, continually evaluated and updated strategies for adaptation to an environment that has a consistent record of hostility to the incompetent.

A recognizable human pre-history began when advanced tree-dwelling primates gave rise to what is called the anthropoid suborder, and after 20-25 million years of experience with life in the trees - which is to say between 5 and 6 million years ago - as the climate changed and forests started disappearing, some of these animals moved to the ground. These were the hominids, descendants of the first anthropoids, or simians, to stand upright and walk on their feet.

As Darwin first commented 150 years ago, the upper limbs which had evolved in simians to provide for efficient locomotion and for foraging above ground were no longer needed to support body weight. For the first bipeds, who found themselves no longer needing the arms to walk along or swing under tree branches, evolution might have recognized this reduced functional demand in a number of ways. For example, reduced need for hand and arm skill could have led to a much smaller arm, perhaps something like what the kangaroo has. Alternatively, if the hominids could have survived on the ground by gathering fruits, nuts, and insects, our ancestors might have kept the arm as it was anatomically but surrendered the neurologic capacity for aerial gymnastics. That is approximately what happened to the gorillas, who have magnificent arms and hands that nevertheless are never used to manufacture or manipulate tools of any kind.

That the hominids did not become land-bound animals with withered arms or indolent lifestyles is almost certainly because they were small animals, and the habitat they entered was no palmy, fragrant Eden: it was a dangerous, sudden-death battleground dominated by powerful and hungry predators. In other words, our ancestors established a place for themselves because they found not less but more for the arm and hand and the brain to do. Indeed, based on what we now know of our own early history and the way that history is written into the human brain, our ancestors staked everything on skilled use of the arm and hand.

The commitment to that specific strategy, the reinvention of the arm and hand, probably began between 3 and 4 million years ago among families of small African apes living in southern and eastern Africa. We know of the earliest hominids because of the work of anthropologists who began searching for the fossil remains of early human ancestors in Africa's Great Rift Valley over 50 years ago. The most important of these fossils was the nearly complete skeleton of a female member of the species now known as Australopithecus afarensis. Nicknamed Lucy, this animal lived about 3.25 million years ago. She was about the size of a chimpanzee, she had a brain volume of 400 cc - about one-third the size of the human brain - she walked upright, and her wrist and hand were structurally unique.

AL-288-1 - a.k.a. "Lucy" - is the currently reigning hominid matriarch. She lived 3.2 million years ago. Her hand is a mosaic of ancient and modern features.

Anthropologist Mary Marzke and Alan Alda discuss Lucy's new hand on Scientific American Frontiers.
What we have learned about her hand, largely through the remarkable studies of the American anthropologist Mary Marzke, is that a few small anatomic modifications would have had a dramatic functional impact on the capacity of the hand. The thumb was longer in relation to the fingers and the index and middle fingers could rotate at the knuckle joint. This change gave Lucy a new grip, referred to as the 3-jaw chuck – this is the same basic grip used by baseball pitchers. So Lucy would have been able to hold larger stones between the thumb and index and middle fingers, and could have thrown them overhand, just as a modern baseball pitcher can. She could even swivel her hips to accelerate the speed of a thrown projectile. What she probably lacked was a computational system to turn that entire novel manipulative potential into a ballistic system – so far as we can tell, she did not have the brain that would have transformed her hand and arm from a feeding and locomotor system to a weapons platform. But that is exactly what her descendants, over the next two million years or so, managed to develop.


And that is not all our post-Australopithecine ancestors managed to accomplish. One of the most interesting aspects of the hand story has to do with the remarkable tendency of biologic systems to continually modify and adapt whatever is already in place when it is beneficial to do so – small changes can lead to unintended and sometimes monumental innovations. For the hominids, the small changes in wrist and hand structure led to significantly improved throwing, and eventually to a complete reorganization of the brain. Being able to hit a target with a stone depends most of all upon the timing of its release. Whenever it was that Lucy or her descendants began to rely upon this new hand to launch rocks at prey or at other predators, they also started building a supremely precise clock to control the activity of muscles in the arm and hand. And once the hominids began to be sharpshooters, evolution began to play with hemispheric specialization in the brain. We can now say with considerable confidence that almost the entire set of distinctive human motor and cognitive skills, including language and our remarkable ability to design, build, and use tools, began as nothing more than an enhanced capacity to control the timing of sequential arm and hand movements used in throwing.

As far as anyone can tell, Lucy's family was the only living hominid species in Africa for nearly one million years, so perhaps she did try her hand at pitching. Without question, based on the archeological and anthropologic record and on the way the human brain is now configured, the australopithecines and their descendants came to depend increasingly on manual skills in their daily lives, and inevitably those who excelled at those skills increased their own chances of survival. Somewhere between 200,000 and 100,000 years ago the hand had reached its present anatomic configuration, the brain had tripled in size, tools were more elaborate, there was a complex society based on the organization of relationships, alliances, ideas, and work, and we started calling ourselves Homo sapiens.

Anthropologists remind us that the social structure of the hominids was profoundly influenced by the manufacture and use of tools. The hand had long been an instrument for social interaction among primates through ritual grooming, and with the advent of cooperative tool manufacture and use, skilled hand use became associated with an open-ended mix of diversity and specialization of skill, which over time greatly strengthened and enlarged the social foundations for human survival, as well as creating a need for greatly expanded and refined methods of communication. What evolved in hominid life, in other words, was not simply a new and clever use of the hand, but a life of shared trust and commitment, based on a realization that life for the individual takes its course and acquires its meaning from and within the community.

Let me briefly summarize what I have said so far: human intelligence, and in fact any human skill in both its abstract and its concrete sense, derives from a history that reaches back into the deep hominid and primate past. Skill is based on far more than muscles, nerves, joints, reflexes, and brain circuits; intelligence cannot be explained by quantifying synapses, or weight, or any other physical attribute of the brain alone. These behavioral potentials are part of a species adaptation that belongs to the whole, almost indescribably long story of hominid and human evolution, and they represent the unfolding in individual humans of genetically based and culturally enhanced strategies for our survival. Trial and error, inventive tinkering and design, major innovations in brain operations to transform two hands into a cooperative pair, and to permit the hands to signal, teach, and even tell stories through mime and gesture all became part of the evolutionary success story of the hominids with their new hand and expanding brain.

As interesting as that story is, or might be to some of us, its importance extends well beyond the domain of historical and evolutionary conjecture. In fact, it has enormous implications for those of us who inherit the Homo sapiens hand-brain complex. The most important of these is that the hand is also a central focal point of individual human motor and cognitive development.

Each of us begins life primed to see the world, to learn from it, and to forge our own personal strategies for staying in and playing the game in rather particular ways. The human genome dictates that at the species level we are all the same, while at the individual level we are all different. How does this work? The British psychologist Henry Plotkin says it is done with what he has called “heuristics” – he means inherent, individually specific physical and mental capacities by which we tune into and react to important conditions or events in the environment. Plotkin uses the term “heuristic” – a word from classical Greek meaning a teaching device or strategy – because he wants us to understand that we are not only equipped to survive physiologically but to learn how to survive behaviorally in the face of both certain and uncertain dangers. Our personal genetic heritage predisposes each of us to excel in the learning and perfection of certain kinds of survival skills more than we do at others, with the resulting diversity adding strength to organized human communities. Beginning at the time of birth we explore the world, we pay attention to our own reactions to it, and we make our natural affinities a guide to the efforts we make to improve ourselves.

    Children building dulcimers that they will play themselves rely on finger and hand movements available only to humans.
Children learn by being brought up in the company of parents, other adults, sibs and other children, from the toys they are given and the games they are taught to play, and from the behavior of an infinite variety of objects grasped and manipulated in their hands. They already understand how to learn from others, and how to teach themselves, by the time they reach school age, but schools can change the learning process for children in profound ways. Most of these are positive, but there are dangers as well; for example, the process can easily be divorced from family and community life, or the school can substitute an approved list of adult career goals for the child’s native curiosity as a prime mover behind his or her personal search for skill mastery and understanding.

There is no doubt that formally organized human education systems are remarkably good at accomplishing what they are usually intended to do, which is getting kids pre-programmed for success in the societies in which they are raised. And even though mistakes are made, because not even a perfect human plan for education could be perfectly executed, the randomness of the failures and deficiencies is a protection (as it always is in Darwinian systems). Sometimes what looks like a mistake or a failure turns out later not to have been, just as in all of biology.

Here I need to stop for a moment to return to the computer, and to say just a little about our experience with this technology as it is being used in schools in the United States. In the past five to ten years, American schools and the federal government have made a major commitment to the transformation of both educational environment and process based on what is commonly known as "wiring" the schools. About eight years ago, the Clinton administration decided that every child in school should have access to a computer and a connection to the internet, and that this should happen as soon as possible. So far, over $50 billion has been spent to reach that goal, and the level of annual spending is now about $8 billion per year. This amount does not include the cost of teacher training, the cost of computer maintenance, or the cost of upgrading software. Experts estimate that a minimum of $15 billion, or 5% of the total cost of public education, will have to be spent every year to meet the national goals for technology in education. Despite the enormous expense of this technology, the cost is accepted as necessary.

Why? First of all, almost everyone in academia sees computers as a boon to learning. In science education, for example, visual images are critical to conceptual learning, and the capacity of the computer to animate these images, and to interactively assist the student in grasping difficult principles, is a tremendous advantage. And because of the ability to connect with the Internet, computers serve another fundamental educational goal by giving students access to information that would otherwise not be available to them, even in the best school library.

Second, and this point is very important to parents, computers will not just improve the teaching of traditional curriculum. Since it is widely assumed that every child will enter a computerized workplace, it is understood that computer literacy – which seems to mean the ability to understand and use computer software – must be acquired at an early age. The child who is unable to use computers will fall behind in school, and will be unemployable. This belief is now so deeply entrenched in American thinking that anyone who questions it publicly is considered a fool. School principals from all over the United States have told me that it is impossible to argue with parents about this point.

This process was so well underway, and so widely seen by the public as an urgent necessity, that it was not until quite recently that anyone in the United States thought to ask whether it really is a good idea. And even if we are now beginning to hear public expressions of doubt, it is rare to hear thoughtful debate or serious criticism. Most American parents are still amazed that anyone could doubt the need to bring schools into the computerized, digitized, and Internet-dependent 21st century as quickly as possible.

But there are doubters, and some of them are people who have spent their entire careers in computer science and information technology. In an interview published in July 1999 in the New York Times, Professor Michael Dertouzos of the Massachusetts Institute of Technology recalled a conversation he had had in 1998 with the former Prime Minister of Israel, Benjamin Netanyahu, on this subject; Netanyahu had said to him:

“I want to connect all toddlers in my country. I have 300,000 toddlers under the age of 5 and I want to connect them, but I'm having trouble finding the money." Dertouzos, who is not a politician but a pioneer in computers and internet technology, said to the Prime Minister, "Mr. Netanyahu, why do you want to do that?" The Prime Minister's reply simply reflected what everyone assumed to be true at that time: "Isn't it obvious?"

While it might surprise some people, Professor Dertouzos said, "No, it's not."

What is obvious is that computer and communications technology are entering the schools and will play a role in education. This does not guarantee that the transformation will be managed intelligently, economically, or with the children’s interests and needs in mind. For a number of reasons that we are just beginning to understand in the United States, the needs and interests of children are almost never seriously considered when the decision is made by schools to buy into the technological revolution.

The legendary father of Artificial Intelligence, Alan Turing, proved that a simple machine could be programmed to perform logical operations, and he postulated that such a machine might even eventually be able to "think". Turing could easily be a demon in my story, but I think this would be deeply unfair to him. At least as portrayed in John Casti's wonderful new book, The Cambridge Quintet, Turing was just as interested as the famous Russian physiologist Nikolai Bernstein in the relationship between computation and movement, and in fact argued that the ideal thinking machine would move and would be equipped with all of the sensory capacities available to humans.

Unlike Turing, however, the cognitive world has come to see thinking and movement as separable, and the computational problem became the star attraction. So, whereas Turing wondered whether a computer could ever mimic human intelligence, at least some cognitive scientists - and, I'm afraid, some educators - now wonder out loud if children can ever become as smart as computers. The flaw in this scare story is the same mistake made by anthropologists in the early part of the twentieth century, who thought that intelligence is simply a matter of the size of the computer (this is literally true: until the middle of that century it was widely held that the only difference between the earliest Homo brain and its predecessors was size, the dividing line between the australopithecines and Homo being drawn at 750 cc). No wonder some people believe that the optimal strategy for education is to seat one computer ready for data transfer - the child, that is - in front of another computer and to execute the download command.
When a child's hand sets an object in motion, a lifetime of thinking takes flight.

Of course, there is nothing new about the educational fantasy behind our headlong rush to "wire" the classroom in order to intensify the child's exposure to whatever information is available on the Internet. Paulo Freire, in Pedagogy of the Oppressed, cut straight through to the heart of this idea when he said:

The teacher’s task is to organize a process which already occurs spontaneously, to “fill” the students by making deposits of information which he or she considers to constitute true knowledge... The banking concept of education is based on a mechanistic, static, spatialized view of consciousness and it transforms students into receiving objects.
And, just as with real banking, computers - and the Internet - are an ideal way to get around the inherent expense, messiness, and unpredictability of face-to-face, live, human interaction. This technology affords the opportunity to realize a level of control over the process entirely unimagined just a few years ago. The "banking" theory of education is not at all new, and perhaps it is not all bad, either. But the delivery system now available to transform that theory into a prophecy is new, and it represents an experiment utterly without precedent in the history of education.

The former editor of the Harvard Education Newsletter, Ed Miller, has recently published a small but important article entitled "The Three M's of Our Totally Wired Future" in Orion Magazine. In it, he strikes an optimistic note about the technological revolution - and its dangers - in education, citing a recent meeting at Teachers College:

A broad consensus emerged from this meeting of psychologists, physicists, historians, philosophers, and computer scientists - that we need to learn much more about technology's potential for impeding the healthy development of children, especially young children, before we plunge headlong into further investment in computers. Teachers are often seen as the stumbling block in efforts to digitize education. They are ridiculed as "technophobes" who resist progress and innovation. In fact, teachers know more about technology and its limitations than they are generally given credit for. Many of our most effective and knowledgeable teachers are skeptical about the usefulness of computers in schools, and with good reason. But they have learned that speaking out against the technofaith has become a kind of heresy. Here's my hope for the future: that good teachers and their allies will find their voices. Already there are a few signs. The research community and some thoughtful journalists are beginning to take seriously what teachers have known for a long time: that machines can never replace the "human dimension" - the teacher's voice telling stories that feed the child's imagination; the teacher's hand helping the child's to grasp the butterfly net; the teacher's eye and heart that see, as no machine will ever see, the spark of recognition in the child's face.

Another example of optimism: Charles Darwin rounds Cape Horn, January 13, 1833, aboard HMS Beagle. The Galapagos discoveries are not yet even imagined.
I can't say how much of Miller’s cautious optimism I share, but I have my own hope: that more teachers will credit the authority of biology in the design of formal learning situations provided to children. It seems abundantly clear to me that, because of the process of co-evolution, the hand enjoys a privileged status in the learning process, being not only a catalyst but an experiential focal point for the organization of the young child’s perceptual, motor, cognitive, and creative world. It seems equally clear that as the child comes to the end of the pre-adolescent stage of development, the hand readily becomes a connecting link between self and community and a powerful enabler of the growing child’s determination to acquire adult skill, responsibility, and recognition. This happens because of the hand’s unique contribution to experiences which associate personal effort with skill mastery and the achievement of difficult and valued outcomes.

And I have one additional hope. Working teachers are our society’s richest repository of experience and understanding about the needs and the potentials of children as learners. I hope they will credit their own authority and will make themselves heard in the ongoing debate about technology in education – if they do not, we and our children are in unprecedented danger.


1. Keynote address presented June 21, 2002 to A Crisis in Early Childhood Education: The Rise of Technologies and the Demise of Play, a conference convened by the Department of Humanities and Human Sciences at Point Park College, Pittsburgh, PA. Subsequently published in the anthology All Work and No Play: How Educational Reforms Are Harming Our Preschoolers, edited by Sharna Olfman; Praeger, 2003.





