Creatures of Comfort is looking for our next set of superstar interns for 2013. Interns work with the small staff at our New York design studio on a variety of aspects of clothing design, production, and management.
Drawings by French teenagers from Art Enfantin n°84
February, March, April 1977
Jean Paul Gaultier, 1984.
The digital pioneer and visionary behind virtual reality has turned against the very culture he helped create
By Ron Rosenbaum
Smithsonian magazine, January 2013.
I couldn’t help thinking of John Le Carré’s spy novels as I awaited my rendezvous with Jaron Lanier in a corner of the lobby of the stylish W Hotel just off Union Square in Manhattan. Le Carré’s espionage tales, such as The Spy Who Came In From the Cold, are haunted by the spectre of the mole, the defector, the double agent, who, from a position deep inside, turns against the ideology he once professed fealty to.
And so it is with Jaron Lanier and the ideology he helped create, Web 2.0 futurism, digital utopianism, which he now calls “digital Maoism,” indicting “internet intellectuals,” accusing giants like Facebook and Google of being “spy agencies.” Lanier was one of the creators of our current digital reality and now he wants to subvert the “hive mind,” as the web world’s been called, before it engulfs us all, destroys political discourse, economic stability, the dignity of personhood and leads to “social catastrophe.” Jaron Lanier is the spy who came in from the cold 2.0.
To understand what an important defector Lanier is, you have to know his dossier. As a pioneer and publicizer of virtual-reality technology (computer-simulated experiences) in the ’80s, he became a Silicon Valley digital-guru rock star, later renowned for his giant bushel-basket-size headful of dreadlocks and Falstaffian belly, his obsession with exotic Asian musical instruments, and even a big-label recording contract for his modernist classical music. (As he later told me, he once “opened for Dylan.”)
The colorful, prodigy-like persona of Jaron Lanier—he was in his early 20s when he helped make virtual reality a reality—was born among a small circle of first-generation Silicon Valley utopians and artificial-intelligence visionaries. Many of them gathered in, as Lanier recalls, “some run-down bungalows [I rented] by a stream in Palo Alto” in the mid-’80s, where, using capital he made from inventing the early video game hit Moondust, he’d started building virtual-reality machines. In his often provocative and astute dissenting book You Are Not a Gadget, he recalls one of the participants in those early mind-melds describing it as like being “in the most interesting room in the world.” Together, these digital futurists helped develop the intellectual concepts that would shape what is now known as Web 2.0—“information wants to be free,” “the wisdom of the crowd” and the like.
And then, shortly after the turn of the century, just when the rest of the world was turning on to Web 2.0, Lanier turned against it. With a broadside in Wired called “One-Half of a Manifesto,” he attacked the idea that “the wisdom of the crowd” would result in ever-upward enlightenment. It was just as likely, he argued, that the crowd would devolve into an online lynch mob.
Lanier became the fiercest and weightiest critic of the new digital world precisely because he came from the Inside. He was a heretic, an apostate rebelling against the ideology, the culture (and the cult) he helped found, and in effect, turning against himself.
And despite his apostasy, he’s still very much in the game. People want to hear his thoughts even when he’s castigating them. He’s still on the Davos to Dubai, SXSW to TED Talks conference circuit. Indeed, Lanier told me that after our rendezvous, he was off next to deliver the keynote address at the annual meeting of the Ford Foundation uptown in Manhattan. Following which he was flying to Vienna to address a convocation of museum curators, then, in an overnight turnaround, back to New York to participate in the unveiling of Microsoft’s first tablet device, the Surface.
Lanier freely admits the contradictions: he’s a kind of research scholar at Microsoft, and he was on a first-name basis with “Sergey” and “Steve” (Brin, of Google, and Jobs, of Apple, respectively). But he uses his lecture-circuit earnings to subsidize his obsession with those extremely arcane wind instruments. Following his Surface appearance, he gave a concert at a small venue downtown, where he played some of them.
Lanier is still in the game in part because virtual reality has become, virtually, reality these days. “If you look out the window,” he says, pointing to the traffic flowing around Union Square, “there’s no vehicle that wasn’t designed in a virtual-reality system first. And every vehicle of every kind built—plane, train—is first put in a virtual-reality machine and people experience driving it [as if it were real] first.”
I asked Lanier about his decision to rebel against his fellow Web 2.0 “intellectuals.”
“I think we changed the world,” he replies, “but this notion that we shouldn’t be self-critical and that we shouldn’t be hard on ourselves is irresponsible.”
For instance, he said, “I’d been an early advocate of making information free,” the mantra of the movement that said it was OK to steal, pirate and download the creative works of musicians, writers and other artists. It’s all just “information,” just 1’s and 0’s.
Indeed, one of the foundations of Lanier’s critique of digitized culture is the very way its digital transmission at some deep level betrays the essence of what it tries to transmit. Take music.
“MIDI,” Lanier wrote, of the digitizing program that chops up music into one-zero binaries for transmission, “was conceived from a keyboard player’s point of view…digital patterns that represented keyboard events like ‘key-down’ and ‘key-up.’ That meant it could not describe the curvy, transient expressions a singer or a saxophone note could produce. It could only describe the tile mosaic world of the keyboardist, not the watercolor world of the violin.”
Quite eloquent, an aspect of Lanier that sets him apart from the HAL-speak you often hear from Web 2.0 enthusiasts (HAL was the creepy humanoid voice of the talking computer in Stanley Kubrick’s prophetic 2001: A Space Odyssey). But the objection that caused Lanier’s turnaround was not so much to what happened to the music, but to its economic foundation.
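Lanier’s “key-down”/“key-up” complaint is concrete enough to sketch. Below is a minimal illustration, in Python, of how a raw MIDI 1.0 note event is encoded: a status byte, an integer pitch, an integer velocity. The helper functions are invented for illustration and not drawn from any particular library, and real MIDI does offer pitch-bend messages as an after-the-fact approximation of curves.

```python
# A minimal sketch of raw MIDI note events (the "tile mosaic").
# Status bytes 0x90 (note-on) and 0x80 (note-off) come from the
# MIDI 1.0 spec; these helper names are illustrative, not from
# any particular library.

def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Encode a note-on: the note starts at a fixed integer pitch (0-127)."""
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

def note_off(channel: int, pitch: int) -> bytes:
    """Encode a note-off: the note simply ends."""
    assert 0 <= channel < 16 and 0 <= pitch < 128
    return bytes([0x80 | channel, pitch, 0])

# Middle C (pitch 60), played and released: the whole gesture is six bytes.
event_stream = note_on(0, 60, 100) + note_off(0, 60)
print(event_stream.hex(" "))  # "90 3c 64 80 3c 00"

# A singer sliding from C to C-sharp has no single event here: the pitch
# field is an integer, so the slide must be approximated after the fact
# (e.g., with pitch-bend messages), not described as a curve.
```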
I asked him if there was a single development that gave rise to his defection.
“I’d had a career as a professional musician and what I started to see is that once we made information free, it wasn’t that we consigned all the big stars to the bread lines.” (They still had mega-concert tour profits.)
“Instead, it was the middle-class people who were consigned to the bread lines. And that was a very large body of people. And all of a sudden there was this weekly ritual, sometimes even daily: ‘Oh, we need to organize a benefit because so and so who’d been a manager of this big studio that closed its doors has cancer and doesn’t have insurance. We need to raise money so he can have his operation.’
“And I realized this was a hopeless, stupid design of society and that it was our fault. It really hit on a personal level—this isn’t working. And I think you can draw an analogy to what happened with communism, where at some point you just have to say there’s too much wrong with these experiments.”
His explanation of the way Google Translate works, for instance, is a graphic example of how a giant just takes (or “appropriates without compensation”) and monetizes the work of the crowd. “One of the magic services that’s available in our age is that you can upload a passage in English to your computer from Google and you get back the Spanish translation. And there’s two ways to think about that. The most common way is that there’s some magic artificial intelligence in the sky or in the cloud or something that knows how to translate, and what a wonderful thing that this is available for free.
“But there’s another way to look at it, which is the technically true way: You gather a ton of information from real live translators who have translated phrases, just an enormous body, and then when your example comes in, you search through that to find similar passages and you create a collage of previous translations.”
“So it’s a huge, brute-force operation?”

“It’s huge but very much like Facebook, it’s selling people [their advertiser-targetable personal identities, buying habits, etc.] back to themselves. [With translation] you’re producing this result that looks magical but in the meantime, the original translators aren’t paid for their work—their work was just appropriated. So by taking value off the books, you’re actually shrinking the economy.”
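What Lanier calls a “collage of previous translations” is, roughly, example-based machine translation. As a toy sketch only—the corpus, the function name, and the greedy longest-match rule are all invented here, and production systems are statistical or neural at enormous scale—a translator in that spirit looks like this:

```python
# A toy "collage" translator in the spirit Lanier describes: no
# understanding, just lookup against phrases that human translators
# already produced. The corpus and matching rule are invented for
# illustration.

CORPUS = {
    # English phrase -> Spanish phrase, each contributed by a human translator
    "good morning": "buenos días",
    "my friend": "mi amigo",
    "where is": "dónde está",
    "the library": "la biblioteca",
}

def collage_translate(sentence: str) -> str:
    """Greedily cover the input with the longest phrases seen before."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest match first, shrinking until something is found.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in CORPUS:
                out.append(CORPUS[phrase])
                i = j
                break
        else:
            out.append(words[i])  # no human ever translated this; pass it through
            i += 1
    return " ".join(out)

print(collage_translate("good morning my friend"))
# -> "buenos días mi amigo", assembled entirely from uncredited human work
```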
The way superfast computing has led to the nanosecond hedge-fund-trading stock markets? The “Flash Crash,” the “London Whale” and even the Great Recession of 2008?
“Well, that’s what my new book’s about. It’s called The Fate of Power and the Future of Dignity, and it doesn’t focus as much on free music files as it does on the world of finance—but what it suggests is that a file-sharing service and a hedge fund are essentially the same things. In both cases, there’s this idea that whoever has the biggest computer can analyze everyone else to their advantage and concentrate wealth and power. [Meanwhile], it’s shrinking the overall economy. I think it’s the mistake of our age.”
The mistake of our age? That’s a bold statement (as someone put it in Pulp Fiction). “I think it’s the reason why the rise of networking has coincided with the loss of the middle class, instead of an expansion in general wealth, which is what should happen. But if you say we’re creating the information economy, except that we’re making information free, then what we’re saying is we’re destroying the economy.”
The connection Lanier makes between techno-utopianism, the rise of the machines and the Great Recession is an audacious one. Lanier is suggesting we are outsourcing ourselves into insignificant advertising-fodder. Nanobytes of Big Data that diminish our personhood, our dignity. He may be the first Silicon populist.
“To my mind an overleveraged unsecured mortgage is exactly the same thing as a pirated music file. It’s somebody’s value that’s been copied many times to give benefit to some distant party. In the case of the music files, it’s to the benefit of an advertising spy like Google [which monetizes your search history], and in the case of the mortgage, it’s to the benefit of a fund manager somewhere. But in both cases all the risk and the cost is radiated out toward ordinary people and the middle classes—and even worse, the overall economy has shrunk in order to make a few people more.”
Lanier has another problem with the techno-utopians, though. It’s not just that they’ve crashed the economy, but that they’ve made a joke out of spirituality by creating, and worshiping, “the Singularity”—the “Nerd Rapture,” as it’s been called. The belief that increasing computer speed and processing power will shortly result in machines acquiring “artificial intelligence,” consciousness, and that we will be able to upload digital versions of ourselves into the machines and achieve immortality. Some say as early as 2020, others as late as 2045. One of its chief proponents, Ray Kurzweil, was on NPR recently talking about his plans to begin resurrecting his now-dead father digitally.
Some of Lanier’s former Web 2.0 colleagues—for whom he expresses affection, not without a bit of pity—take this prediction seriously. “The first people to really articulate it did so right about the late ’70s, early ’80s and I was very much in that conversation. I think it’s a way of interpreting technology in which people forgo taking responsibility,” he says. “‘Oh, it’s the computer that did it, not me.’ ‘There’s no more middle class? Oh, it’s not me. The computer did it.’
“I was talking last year to Vernor Vinge, who coined the term ‘singularity,’” Lanier recalls, “and he was saying, ‘There are people around who believe it’s already happened.’ And he goes, ‘Thank God, I’m not one of those people.’”
In other words, even to one of its creators, it’s still just a thought experiment—not a reality or even a virtual-reality hot ticket to immortality. It’s a surreality.
Lanier says he’ll regard it as faith-based, “Unless of course, everybody’s suddenly killed by machines run amok.”
“Skynet!” I exclaim, referring to the evil machines in the Terminator films.
At last we come to politics, where I believe Lanier has been most farsighted—and which may be the deep source of his turning into a digital Le Carré figure. As far back as the turn of the century, he singled out one standout aspect of the new web culture—the acceptance, the welcoming of anonymous commenters on websites—as a danger to political discourse and the polity itself. At the time, this objection seemed a bit extreme. But he saw anonymity as a poison seed. The way it didn’t hide, but, in fact, brandished the ugliness of human nature beneath the anonymous screen-name masks. An enabling and foreshadowing of mob rule, not a growth of democracy, but an accretion of tribalism.
It’s taken a while for this prophecy to come true, a while for this mode of communication to replace and degrade political conversation, to drive out any ambiguity, any departure from the binary. But it is slowly turning us into a nation of hate-filled trolls.
Surprisingly, Lanier tells me it first came to him when he recognized his own inner troll—for instance, when he’d find himself shamefully taking pleasure when someone he knew got attacked online. “I definitely noticed it happening to me,” he recalled. “We’re not as different from one another as we’d like to imagine. So when we look at this pathetic guy in Texas who was just outed as ‘Violentacrez’…I don’t know if you followed it?”
“I did.”

“Violentacrez” was the screen name of a notorious troll on the popular site Reddit. He was known for posting “images of scantily clad underage girls…[and] an unending fountain of racism, porn, gore” and more, according to the Gawker.com reporter who exposed his real name, shaming him and evoking consternation among some Reddit users who somehow felt that this use of anonymity was inseparable from freedom of speech.
“So it turns out Violentacrez is this guy with a disabled wife who’s middle-aged and he’s kind of a Walter Mitty—someone who wants to be significant, wants some bit of Nietzschean spark to his life.”
Only Lanier would attribute Nietzschean longings to Violentacrez. “And he’s not that different from any of us. The difference is that he’s scared, and possibly hurt, a lot of people.”
Well, that is a difference. And he couldn’t have done it without the anonymous screen name. Or he wouldn’t have.
And here’s where Lanier says something remarkable and ominous about the potential dangers of anonymity.
“This is the thing that continues to scare me. You see in history the capacity of people to congeal—like social lasers of cruelty. That capacity is constant.”
“Social lasers of cruelty?” I repeat.
“I just made that up,” Lanier says. “Where everybody coheres into this cruelty beam…. Look what we’re setting up here in the world today. We have economic fear combined with everybody joined together on these instant twitchy social networks which are designed to create mass action. What does it sound like to you? It sounds to me like the prequel to potential social catastrophe. I’d rather take the risk of being wrong than not be talking about that.”
Here he sounds less like a Le Carré mole than the American intellectual pessimist who surfaced back in the ’30s and criticized the Communist Party he left behind: someone like Whittaker Chambers.
But something he mentioned next really astonished me: “I’m sensitive to it because it murdered most of my parents’ families on two different occasions and this idea that we’re getting unified by people in these digital networks—”
“Murdered most of my parents’ families.” You heard that right. Lanier’s mother survived an Austrian concentration camp but many of her family died during the war—and many of his father’s family were slaughtered in prewar Russian pogroms, which led the survivors to flee to the United States.
It explains, I think, why his father, a delightfully eccentric student of human nature, brought up his son in the New Mexico desert—far from civilization and its lynch mob potential. We read of online bullying leading to teen suicides in the United States and, in China, there are reports of well-organized online virtual lynch mobs forming…digital Maoism.
He gives me one detail about what happened to his father’s family in Russia. “One of [my father’s] aunts was unable to speak because she had survived the pogrom by remaining absolutely mute while her sister was killed by sword in front of her [while she hid] under a bed. She was never able to speak again.”
It’s a haunting image of speechlessness. A pogrom is carried out by a “crowd,” the true horrific embodiment of the purported “wisdom of the crowd.” You could say it made Lanier even more determined not to remain mute. To speak out against the digital barbarism he regrets he helped create.
“Around the year 1910, a patient at State Lunatic Asylum No. 3 in Nevada, Missouri, who referred to himself as The Electric Pencil, executed 280 drawings in ink, pencil, crayon and colored pencil.”
Mask, green gneiss stone, 1928
Mask, cast concrete, 1929
The War Cripples, 1920
Lady with Mink and Veil, 1920
JUNE 23, 2011
New York Review of Books
Early this April, when researchers at Washington University in St. Louis reported that a woman with a host of electrodes temporarily positioned over the speech center of her brain was able to move a computer cursor on a screen simply by thinking but not pronouncing certain sounds, it seemed like the Singularity—the long-standing science fiction dream of melding man and machine to create a better species—might have arrived. At Brown University around the same time, scientists successfully tested a different kind of brain–computer interface (BCI) called BrainGate, which allowed a paralyzed woman to move a cursor, again just by thinking. Meanwhile, at USC, a team of biomedical engineers announced that they had successfully used carbon nanotubes to build a functioning synapse—the junction at which signals pass from one nerve cell to another—which marked the first step in their long march to construct a synthetic brain. On the same campus, Dr. Theodore Berger, who has been on his own path to make a neural prosthetic for more than three decades, has begun to implant a device into rats that bypasses a damaged hippocampus in the brain and works in its place.
The hippocampus is crucial to memory formation, and Berger’s invention holds the promise of overcoming problems related to both normal memory loss that comes from aging and pathological memory loss associated with diseases like Alzheimer’s. Similarly, the work being done at Brown and Washington University suggests the possibility of restoring mobility to those who are paralyzed and giving voice to those who have been robbed by illness or injury of the ability to communicate. If this is the Singularity, it looks not just benign but beneficent.
Michael Chorost is a man who has benefited from a brain–computer interface, though the kind of BCI implanted in his head after he went deaf in 2001, a cochlear implant, was not inserted directly into his brain, but into each of his inner ears. The result, after a lifetime of first being hard of hearing and then shut in complete auditory solitude, as he recounted in his memoir, Rebuilt: How Becoming Part Computer Made Me More Human (2005), was dramatic and life-changing. As his new, oddly jejune book, World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet, makes clear, he is now a cheerleader for the rest of us getting kitted out with our own, truly personal, in-brain computers. In Chorost’s ideal world, which he lays out with the unequivocal zeal of a convert, we will all be connected directly to the Internet via a neural implant, so that the Internet “would become seamlessly part of us, as natural and simple to use as our own hands.”
The debate between repair and enhancement is long-standing in medicine (and sports, and education, and genetics), though it gets louder and more complicated as technology advances. Typically, repair, like what those Brown, USC, and Washington University research teams are aiming to do for people who have suffered stroke, spinal cord and other injuries, neurodegeneration, dementia, or mental illness, is upheld as something good and necessary and worthy. Enhancement, on the other hand—as with performance drugs and stem cell line manipulation—is either reviled as a threat to our integrity and meaning as humans or conflated with repair until the distinction becomes meaningless.
Chorost bounces over this debate altogether. While the computer in his head was put there to fix a deficit, the fact that it is there at all is what seems to convince him that the rest of us should become cyborgs. His assumption—it would be too generous to call it an argument—is that if it worked for him, it will work for us. “My two implants make me irreversibly computational, a living example of the integration of humans and computers,” he writes. “So for me the thought of implanting something like a BlackBerry in my head is not so strange. It would not be so strange for a lot of people, I think.”
More than a quarter-century ago, a science writer named David Ritchie published a book that I’ve kept on my bookshelf as a reminder of what the post-1984 world was supposed to bring. Called The Binary Brain, it extolled “the synthesis of human and artificial intelligence” via something he called a “biochip.” “The possibilities are marvelous to contemplate,” he wrote.
You could plug into a computer’s memory banks almost as easily as you put on your shoes. Suddenly, your mind would be full of all the information stored in the computer. You could instantly make yourself an expert in anything from Spanish literature to particle physics…. With biochips to hold the data, all the information in the MIT and Harvard libraries might be stuffed into a volume no greater than that of a sandwich. All of Shakespeare in a BB-sized module…. You may see devices like this before this century ends.
“Remember,” he says gravely, “we are talking here about a technology that is just around the corner, if not here already. Biochips would lead to the development of all manner of man-machine combinations….”
Twenty-six years later, in the second decade of the new millennium, here is Chorost saying almost the same thing, and for the same reason: our brains are too limited to sufficiently apprehend the world. “Some human attributes like IQ appear to have risen in the twentieth century,” he writes, “but the rate of increase is much slower than technology’s. There is no Moore’s Law for human beings.” (Moore’s Law is the much-invoked thesis, now elevated to metaphor, that the number of components that can be placed on an integrated circuit doubles every two years.) Leaving aside the flawed equivalences—that information is knowledge and facts are intelligence—Chorost’s “transmog” dream is rooted in a naive, and common, misperception of the Internet search engine, particularly Google’s, which is how most Internet users navigate through the fourteen billion pages of the World Wide Web.
Most of us, I think it’s safe to say, do not give much thought to the algorithm that produces the results of a Google search. Ask a question, get an answer—it’s a straightforward transaction. It seems not much different from consulting an encyclopedia, or a library card catalog, or even an index in a book. Books, those other repositories of facts, information, and ideas, are the template by which we understand the Web, which is like a random, messy, ever-expanding volume of every big and little thing. A search is our way into and through the mess, and when it’s made by using Google, it’s relying on the Google algorithm, a patented and closely guarded piece of intellectual property that the company calls PageRank, composed of “500 million variables and 2 billion terms.”
Those large numbers are comforting. They suggest an impermeable defense against bias, a scientific objectivity that allows the right response to the query to bubble up from the stew of so much stuff. To an extent it’s a self-perpetuating system, since it uses popularity (the number of links) as a proxy for importance, so that the more a particular link is clicked on, the higher its PageRank, and the more likely it is to appear near the top of the search results. (This is why companies have not necessarily minded bad reviews of their products.) Chorost likens this to Hebbian learning—the notion that neurons that fire together, wire together—since a highly ranked page will garner more page views, thus strengthening its ranking. [In this way] pages that link together “think” together. If many people visit a page over and over again, its PageRank will become so high that it effectively becomes stored in the collective human/electronic long-term memory.
Even if this turns out to be true, the process is anything but unbiased.
A Google search—which Chorost would have us doing in our own technologically modified heads—“curates” the Internet. The algorithm is, in essence, an editor, pulling up what it deems important, based on someone else’s understanding of what is important. This has spawned a whole industry of search engine optimization (SEO) consultants who game the system by reconfiguring a website’s code, content, and keywords to move it up in the rankings. Companies have also been known to pay for links in order to push themselves higher up in the rankings, a practice that Google is against and sometimes cracks down on. Even so, results rise to the top of a search query because an invisible hand is shepherding them there.
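Popularity-as-importance is easy to see in miniature. Here is a minimal power-iteration PageRank over an invented four-page link graph; the 0.85 damping factor is the conventional choice from the original PageRank paper, but the pages and links are made up, and Google’s production ranking layers many further signals on top:

```python
# Minimal power-iteration PageRank over an invented four-page web.
# Links (popularity) are the only signal here.

DAMPING = 0.85  # conventional damping factor from the original paper

links = {  # page -> pages it links to (all invented)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start uniform

for _ in range(50):  # iterate until the ranks settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = DAMPING * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# "c" wins: three pages link to it, and rank flows along links —
# popularity standing in for importance, exactly the proxy at issue.
```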
Junya Watanabe FW 06/07 masks.
“Van Gogh” read by Joe Brainard at St. Marks Church on March 31st, 1971.
From Selected Writings (New York: The Kulchur Foundation, 1971).
Samantha was sitting on a lawn chair in her parents’ garage, smoking a joint, when she decided to run away. She had just graduated from high school, where she had few friends, and felt invisible. She went to class stoned and wrote suicidal poems about the shame of being molested by a family friend: “why try when there is no hope / for my dirty soul there is no soap.” The thought of remaining in her home town, in central Florida, made her feel ill. Reclining in her chair in the brightly lit garage, she closed her eyes and thought, Is this going to be my life?
Samantha had got A’s in high school and had planned to escape to college, until she realized she couldn’t afford it. The only other option, she decided, was to flee. She wanted to go to Manhattan, which she’d never visited, because it seemed like a good place to meet other lesbians. Samantha enjoyed reading about botany and had long assumed that, like some plants, she was asexual, a self-sustaining organism. She found it trivial and unbecoming when girls at school pined over their crushes. Then, at fifteen, she watched “Lara Croft: Tomb Raider” and was uncomfortably captivated by Angelina Jolie. Her English teacher at the time had the students spend five minutes every day on an exercise called Vomit, in which they wrote down every phrase that occurred to them. Their pens could not stop moving. “In my fifty-millionth Vomit, I spaced out and wrote, ‘I’m a lesbian and no one knows,’ ” she told me. “It was this crazy voice that knew.”
Throughout the summer of 2009, Samantha researched the logistics of being homeless in New York, reading all the articles she could find online, no matter how outdated. She learned that if she went to a homeless shelter before she was eighteen social workers would be required to contact her family. She wanted nothing to do with her parents, who, she believed, hadn’t taken her complaints of sexual abuse seriously; her mother suggested it was a hallucination. Samantha planned to live on the streets for several weeks, until her eighteenth birthday. Then she would begin the rest of her life: getting a job, finding an apartment, and saving for college.
In a purple spiral-bound notebook, she created a guide for life on the streets. She listed the locations of soup kitchens, public libraries, bottle-return vending machines, thrift stores, and public sports clubs, where she could slip in for free showers. Under the heading “known homeless encampments,” she wrote down all the parks, boardwalks, and tunnels where she could sleep and the subway line she’d take to get there. Her most detailed entry was a description of an abandoned train tunnel in Harlem and the name of a photographer who had taken pictures of the homeless people who lived in it. She hoped that if she mentioned the photographer’s name she would be “accepted by the underground society.”
On September 5, 2009, she bought a Greyhound bus ticket using the name Samantha Green. (She has asked me not to use her legal name.) Her parents were away for the day, visiting friends, and she told her thirteen-year-old brother that she was leaving for New York. He expressed concern about her being homeless, but she reassured him. “It’s kind of like camping,” she said. Her brother, who had always treated her with reverence, agreed not to tell her parents where she was going. He helped her break into her father’s safe so that she could take her birth certificate. Then he drove her to Walmart, where she bought a durable backpack, a roll of duct tape, protein bars, multivitamins, a box of garbage bags, a canteen, and a jar of peanut butter.
Samantha’s parents came home six hours after she left and found a note on her bed: “I’m not coming back for a long time… . I am safe where I am.”
Samantha spent her first few nights in Central Park, sleeping under a pine tree. She wore the cargo pants, steel-toed Brahma work boots, and blue hoodie that she had left home in. She kept an open book by her side so that anyone passing by would assume she was a student who had drifted off. Using her backpack as a pillow, she slept lightly, alert to the sound of footsteps. More than any noise, she feared the buzz of police radios. She avoided thoughts of danger by embellishing them, imagining that her absence was of central concern to the police. She survived her first days in New York, she said, by “acting like I was in some sort of spy novel.”
For hours every day, she wandered around the city, memorizing street names and bus routes, observing how the neighborhoods changed depending on the time of day. Her favorite time was just before dawn, when the bars let out. She watched drunken tourists shout foolish things as they searched for cabs, and enjoyed knowing that, comparatively, she had her bearings. Rarely sleeping more than four hours a night, she was constantly looking for opportunities to close her eyes. One of her first discoveries was the Museum of Natural History, where the bathroom stalls were conveniently narrow. She could sit on the toilet, her head against the stall, until she was woken at the end of the day by the sound of the janitor’s mop.
By sharing cigarettes, she befriended other homeless kids, many of whom hung out at the Apple Store on Fifth Avenue. Their poverty wasn’t apparent—most of them had stolen at least one trendy outfit—but Samantha could spot them easily, because of their backpacks and the way they lingered near the least impressive computers. (The pictures in their Facebook profiles had shiny new laptops in the background.) On rainy nights, Samantha occasionally slept with them on the A, C, E subway line, which has the city’s longest route. They called it “Uncle Ace’s house.” One person would stay awake, on guard against cops or thieves; the rest napped until the end of the line.
Many of the kids knew each other from the youth shelters, a decentralized and temporary system that turns away far more people than it houses. The city has roughly two hundred and fifty shelter beds for some four thousand youth between the ages of thirteen and twenty-five who are homeless on any given night. This substratum of the homeless population has historically been overlooked. Until 1974, running away was a crime. The federal youth shelter system wasn’t established until the seventies, following an era in which homeless kids were seen as middle-class dropouts who would shortly return home. The media portrayed them as rebellious flower children in search of a countercultural utopia. According to a 1967 article in the Times, the crisis involved “thousands of young runaways, particularly girls, who are flooding the Village area to live as hippies.”
During the recent recession, the rate of unemployment for people between the ages of fifteen and twenty-four reached nearly twenty per cent, a record high. Samantha dropped off her résumé (which she printed at libraries) at dozens of fast-food restaurants, but having no job experience, and given her appearance—she had packed no change of clothes—she seldom got called for interviews. She tried to make money by recycling bottles, but older homeless people had cornered the market. Instead, she shoplifted. It was easy, because of her wholesome looks. Half Cherokee on her mother’s side, she had sharp cheekbones, high-arched eyebrows, and long, shiny hair. She targeted chain stores like 7-Eleven and Whole Foods: she’d steal a package of oatmeal from one and then use the microwave at the other.
In a journal stolen from Barnes & Noble, she kept a log of all the items she’d pocketed: Advil PM, beef jerky, “Practical Guide to Cherokee Sacred Ceremonies and Traditions,” four lesbian romances by Gerri Hill, Emergen-C, an exercise shirt, an onion bagel. “I started this log with the intention of paying all these stores back when I got back on my feet again,” she wrote on the second page. “I now know that’s impossible.”
Shoe designed by Pablo Picasso and fabricated by Perugia, c. 1955.