The Origin of "Don't Be Evil"

When my wife was in graduate school to get a master's degree in education, she took a class about how to teach students of different cultures without racial bias. Near the end of the class, one of her classmates said of the textbook they'd been reading, "You know, this book should just be called, 'Don't Be a Dick.' And all the pages could be blank."

I thought of that story recently while reading Steven Levy's book about Google, In the Plex, which includes the origin story of Google's infamous company motto, "Don't Be Evil." It's common these days for bloggers and journalists to point out all the ways in which Google falls short of the ideal expressed in that motto. So it was surprising, for me at least, to learn that the motto actually started as a kind of joke, not unlike the joke my wife's classmate made about not being a dick.

According to Steven Levy, Google held a meeting in 2001 to try to nail down its corporate values. Stacy Sullivan, the head of human resources, stood at the front of the room with a giant notepad, writing down platitudes like, "Google will strive to honor all its commitments." But engineer Paul Buchheit thought the whole thing was absurd.

Levy writes,

Paul Buchheit was thinking, This is lame. Jawboning about citizenship and values seemed like the kind of thing you do at a big company. He’d seen enough of that at his previous job at Intel. At one point the chipmaker had given employees little cards with a list of values you could attach to your badge. If something objectionable came up you were to look at your little corporate values card and say, “This violates value number five.” Lame. “That whole thing rubbed me the wrong way,” Buchheit later recalled. “So I suggested something that would make people feel uncomfortable but also be interesting. It popped into my mind that ‘Don’t be evil’ would be a catchy and interesting statement. And people laughed. But I said, ‘No, really.’”

The slogan made Stacy Sullivan uncomfortable. It was so negative. “Can’t we phrase it as ‘Do the right thing’ or something more positive?” she asked. Marissa and Salar agreed with her. But the geeks—Buchheit and Patel—wouldn’t budge. “Don’t be evil” pretty much said it all, as far as they were concerned. They fought off every attempt to drop it from the list.

“They liked it the way it was,” Sullivan would later say with a sigh. “It was very important to engineering that they were not going to be like Microsoft, they were not going to be an evil company.”

I just love the fact that the motto did not originate out of some wide-eyed idealism. Instead, it was an attempt to cut through the whole bullshit concept of "corporate values." It's no wonder the company has had trouble living up to that ideal. "Don't Be Evil" is the implicit motto of every idealistic company before it gets mired in the messy, morally compromised world of actually making money.

Better Living (and Less Anxiety) through Software

It was truly a pleasure to be a guest on Brett Terpstra's podcast Systematic this week. He's had some amazingly interesting folks on the show lately, so I just hope I measure up. We talked about my background in radio and then segued into the topic of anxiety and technology.

Fittingly, I began feeling anxious almost as soon as we finished the Skype call. Not that it wasn't a good conversation, but there was one part where I felt I could have explained myself a lot better. I had been talking about a turning point in my life, when I started my second and last job in public radio.

My first job in radio was writing for a show called The Writer's Almanac, and I was good at it, despite the fact that the show's host was notoriously demanding. In my first three years writing for the show, three producers quit, along with several other writers who either quit or got fired. I was finally the only one left standing, so I became the sole writer and producer, and I persisted for two more years. The day I left, they said I should get a plaque for lasting as long as I did. I thought this constituted evidence of my competence.

And yet, when I moved to a different job on a different radio show, I suddenly felt like the least competent person in the world. This was especially confusing because the new job should have been easier. I was no longer the sole writer and producer of a show; I was just one associate producer on a team. I only had to write bits and pieces of script, do occasional research, write the occasional blog post, answer listener emails, book guests, and help edit audio. None of these tasks was high stakes. It should have been a breeze. But it nearly killed me.

Part of the problem was multitasking. At my previous job, I'd been doing one thing at a time. Write this script. Now write that script. I did most of my work from home in a quiet room. I was allowed to focus.

At my new job, I was always juggling multiple projects: researching the next guest, proofreading the latest script, writing a promo, editing tape. I had always relied on my memory to keep track of my to-do list (I rarely wrote down homework assignments in high school or even studied for tests, and still did well), but my memory completely failed me in this new work environment. I began to worry all the time about whether I had forgotten something. Had I booked that guest for the right time? Had I checked the time zone? Did I fact check that script sufficiently? Should I read it through one more time?

Another problem was the office environment. I worked in a cubicle, with team members all around me. There was little space or time to focus deeply on anything. We were all expected to be on email all the time, injecting our thoughts into one another's brains at will. One of my tasks was to respond to listener email, and every Monday we got a flood of responses to our show, both tremendously positive and viciously negative. And if there had been any factual errors in the show, the listeners would take us to task, and the host would not be happy. I began to dread the weekend, imagining the army of potential attackers amassing and hurling their spears into cyberspace, each blow landing in my inbox on Monday morning.

The result of all this anxiety was that I found it harder and harder to concentrate. I began to make the mistakes I so feared making. Which only made me worry more. I started waking up every night at 3:00 AM, unable to get back to sleep, my mind racing with everything I needed to worry about. Then I started waking up at 2:00 AM. Then 1:00 AM. Then midnight. If this had continued, I would have started waking up before I went to sleep.

If you have not experienced severe depression or anxiety, you might find it hard to understand how physical an illness it really is. I did not just feel sick in my head. Every cell in my body felt scraped out and raw. I had no patience for my children. I had no energy to help my wife around the house. Imagine how you feel when you realize something horrible is about to happen: you forgot the essential thing you need for that important meeting, your car is sliding on the ice, or your child is falling head first off the jungle gym in slow motion. Now imagine feeling that kind of dread every waking moment for weeks on end.

That was me at my lowest point. I kept asking myself, "Why can't I do this? This shouldn't be so hard. What's wrong with me?"

In the interview with Brett, I alluded to something I read once that compared depression to a fever (unfortunately, the author was the now-discredited Jonah Lehrer, but I still find the article persuasive). In response to an infection, the body raises its own temperature as a way of killing off the infection. Depression, likewise, raises the frequency of negative "ruminative" thoughts. Psychiatrists have typically seen these kinds of thoughts as part of the problem, but some believe depression may be the body's way of forcing you to focus on what's wrong in your life in order to change it.

Imagine, for instance, a depression triggered by a bitter divorce. The ruminations might take the form of regret (“I should have been a better spouse”), recurring counterfactuals (“What if I hadn’t had my affair?”) and anxiety about the future (“How will the kids deal with it? Can I afford my alimony payments?”). While such thoughts reinforce the depression — that’s why therapists try to stop the ruminative cycle — Andrews and Thomson wondered if they might also help people prepare for bachelorhood or allow people to learn from their mistakes. “I started thinking about how, even if you are depressed for a few months, the depression might be worth it if it helps you better understand social relationships,” Andrews says. “Maybe you realize you need to be less rigid or more loving. Those are insights that can come out of depression, and they can be very valuable.”

Of course, it's important to note that while a fever can help rid your body of germs, it can also kill you. I don't know what might have happened to me if I hadn't talked to a doctor at the time. Medication was definitely part of my recovery. It helped reduce my symptoms so that I could see the root cause of the problem: this was not the right job for me.

So I quit, and took a couple months off before I started my next job. In that time, I realized two things. First, I wanted to learn how to be more organized. Second, I wanted to make time for the kind of deep focus creative work that gave my life real meaning. That was five years ago, and I've managed to accomplish both of those goals, largely with the help of software.

There's been some talk lately about whether software tools actually provide any benefit, and whether software design is solving real problems. But for me, every time I dump my mind into OmniFocus, or add an event to Fantastical, or forward an email with attachments to Evernote, or set a reminder in Due, I feel a little more in control of my life. I can much more easily manage my job as a college writing teacher, juggling multiple projects, multiple classes, lesson planning, grading, committee meetings, department responsibilities, and so on.

Keeping my life more organized also makes it possible to have a clear head when I want to focus on something important. One of my goals after quitting my job was to write a novel, and I finally made time for it. The app Scrivener helped me break the novel down into manageable pieces, and for the first time in my life, writing fiction felt enjoyable rather than fraught. More recently, I was inspired by the power of the app Editorial to start this website (and I've written almost every post with it).

Of course, there's a danger here. Buying a new notebook and a fancy pen does not make you a writer. Making a to-do list is not an actual accomplishment. Tools are not the end goal, and using a tool, no matter how well-designed, does not make hard work any easier. But the right tool can provide an important cue to help create a habit or build a ritual for doing the actual work.

Software has improved my life by making the work feel more possible, creating virtual spaces where I feel less anxious. And the less anxious I feel, the more I feel capable of doing the work that matters, and the more I feel alive.

The Illusion of Power

I love this Rolling Stone interview with George R.R. Martin, which goes a long way toward explaining why the Game of Thrones books (i.e., A Song of Ice and Fire) are so much more than escapist fiction. I read them as a sword and sorcery version of The Wire, with a Hobbesian view of power as the central theme. As Martin says,

One of the central questions in the book is Varys' riddle: The rich man, the priest and the king give an order to a common sellsword. Each one says kill the other two. So who has the power? Is it the priest, who supposedly speaks for God? The king, who has the power of state? The rich man, who has the gold? Of course, doesn't the swordsman have the power? He's the one with the sword – he could kill all three if he wanted. Or he could listen to anyone. But he's just the average grunt. If he doesn't do what they say, then they each call other swordsmen who will do what they say. But why does anybody do what they say? This is the fundamental mystery of power and leadership and war through all history....It's all based on an illusion.

Most people familiar with the books or the TV show remember the dramatic deaths of various characters best, but for me, one of the most powerful scenes in any of the books (mild spoiler from book/season 1) was the moment Ned Stark and Cersei face each other down after the death of the king. Ned holds the king's seal, which he claims gives him the power to rule. Cersei claims the power belongs to her and her son, the heir to the throne. The room is filled with armed guards, who have to decide whom to follow. What makes the scene so dramatic is that Ned and Cersei have no real power. They have no weapons to wield but words. All their power flows from the people around them who choose to believe they have power.

For some reason, that scene lays bare the illusion of power better than almost anything I've ever read. I think about it all the time, in department meetings at the college where I teach, at campus events when the president of my college gives a speech, even when I watch the President of the United States on TV. It reminds me of something the physicist and author Janna Levin said on a radio show where I used to work, about how her cosmological view of the universe sometimes gives her a strange perspective on our race of primates and the ways we organize ourselves on this tiny planet:

You know, for me, it's so absurd, because it's so small and it's so — this funny thing that this one species is acting out on this tiny planet in this huge, vast cosmos. Of course, I take very seriously our voting process and I'm, you know, very, try to be politically conscious. But sometimes, when I think about it, I have to laugh that we're all just agreeing to respect this agreement that this person has been elected for something. And that is really a totally human construct that we could turn around tomorrow and all choose to behave differently. We're animals that organize in a certain way. So it's not that I completely dismiss it or don't take it seriously, but I think a lot of the things we are acting out are these animalistic things that are consequences of our instincts. And they aren't, in some sense, as meaningful to me as the things that will live on after our species comes and goes.

Sharing the Ecosystem

Tim Cook got a lot of attention back in February when he was challenged at a shareholder meeting to explain Apple’s commitment to green energy initiatives. A conservative group of shareholders had put forward a proposal asking Apple to focus only on initiatives that had a clear ROI (return on investment). According to a report in Mac Observer, Tim Cook grew visibly angry at the suggestion:

When we work on making our devices accessible by the blind, I don’t consider the bloody ROI….If you want me to do things only for ROI reasons, you should get out of this stock.

Cook underlined his commitment to the environment again this past week by providing the voiceover for Apple’s promotional video Better, about Apple’s use of solar and wind energy, among other environmentally friendly practices. But it’s worth noting the difference in the message. At the shareholder meeting, Cook seemed to be saying that he doesn’t care about return on investment – doesn’t care about profits – when it comes to doing things that are simply right. But in the video he keeps repeating the word “better” in reference both to Apple’s products and Apple’s commitment to the environment. It’s not that he doesn’t care about return on investment; it’s that he’s enlarging the very meaning of the term.

Better. It’s a powerful word and a powerful ideal. It makes us look at the world and want more than anything to change it for the better, to innovate, improve, to reinvent, to make it better. It’s in our DNA. And better can’t be better if it doesn’t consider everything. Our products. Our values.

If Tim Cook hadn’t gotten so angry at that guy at the shareholder meeting, he might have explained that profits are only one return on the investment. If you’re the most valuable company in the world, and you’re not concerned about the impact of your company on the environment, you’re not playing the long game. We all share the same ecosystem. Investing in that ecosystem is investing in the future. It might not look like a profitable investment, but it could yield immeasurable returns.

So I’m heartened by Apple’s apparent commitment to the environmental ecosystem, but I wish they had the same attitude toward their software ecosystem.

I know the history of that ecosystem from my vantage point as a user. I switched to a Mac in 2007, and as much as I loved the hardware, I discovered pretty quickly that the real advantage was the software, and not just the software made by Apple. Independent developers who cared about design, who wanted to make software for individuals rather than the enterprise, had been using Macs and writing Mac software for years. Using those applications for the first time, I began to see software as a kind of art form in and of itself.

The iPhone and the App Store brought that art form to the masses. By creating a place where customers could easily, and without fear, download any number of apps, Apple made software mainstream. Before that, most customers only bought software when they bought their computers, preloaded with an office suite and maybe one or two more apps. The iPhone, and later the iPad, provided both the perfect delivery mechanism and the perfect medium for software, because the entire device itself changed based on whatever software had just been launched.

The result was that Apple managed to cultivate what I’d argue was the richest software ecosystem in the history of computing. Which is why it’s so strange that Apple now seems to be on the cusp of letting that ecosystem wither. It’s no secret that the App Store is broken, that developers are having a harder and harder time making good money. Marco Arment has been talking about this for a long time, most clearly when he made his case this past fall that paid-upfront apps are dead. Ben Thompson wrote a series of blog posts around the same time at Stratechery, laying out the reasons why Apple is motivated to drive down the cost of apps (and why it's a big-picture mistake).

Apple makes money on hardware. It’s in their interest that said hardware be sold for as much of a premium as the market will bear. However, it’s equally in their interest that the complements to that hardware are sold as cheaply as possible, and are preferably free….In the case of apps, the current app store, full of a wide variety of inexpensive apps, is perfect from Apple’s perspective. It’s a reason to buy Apple hardware, and that’s all that matters. Anything that on the surface makes the store less desirable for hardware buyers – such as more expensive apps – is not in Apple’s interest.

This is bloody ROI thinking. In its latest commercials, with iPads on mountaintops and iPhones on motorcycles, Apple wants to trade on the power of its devices to do amazing things. But software is what gives those devices their power. And Apple is doing very little to help sustain the people who create that software, let alone give them the respect and the gratitude they deserve. As Justin Williams recently said about his trip to the Microsoft developer conference:

What’s different though is that it feels like Microsoft is more interested in working with us as a partner whereas Apple has always given off a vibe of just sort of dealing with us because they have to.

I find it fitting that the number one request on most people’s lists for iOS 8 is better sharing of information between apps. What Apple needs is better sharing, period. Healthy ecosystems are all about sharing. “Better can’t be better if it doesn’t consider everything.” Just as Tim Cook sees the value in sustaining the world’s ecosystem, he needs to see the value in sustaining the developer ecosystem. It’s those developers who can provide the real return on investment, making both his products, and the world, better.

Screens Aren't Evil

Matt Honan's recent post on Wired about parenting in the age of ubiquitous screens has gotten people like Shawn Blanc and Stephen Hackett talking.

Honan:

But the ever-present touchscreens make me incredibly uneasy—probably because they make parenting so easy. There is always one at hand to make restaurants and long drives and air travel much more pleasant. The tablet is the new pacifier.

For a much more in-depth look at this subject, check out Hanna Rosin's piece for The Atlantic, “The Touch Screen Generation,” which I expected to be another hand-wringing exposé on the decline of western civilization by way of modern technology. But it's quite the opposite. One of the myths she busts is the idea that screens are inherently less stimulating than something like reading.

An early strain of research claimed that when we watch television, our brains mostly exhibit slow alpha waves—indicating a low level of arousal, similar to when we are daydreaming. These findings have been largely discarded by the scientific community, but the myth persists that watching television is the mental equivalent of, as one Web site put it, “staring at a blank wall.” These common metaphors are misleading, argues Heather Kirkorian, who studies media and attention at the University of Wisconsin at Madison. A more accurate point of comparison for a TV viewer’s physiological state would be that of someone deep in a book, says Kirkorian, because during both activities we are still, undistracted, and mentally active.

I distinctly remember hearing about that alpha wave research, and ever since I had taken it as gospel that screens make us more passive than books. But it's false! Most of the articles on children and screen-time assume the medium is the problem, but the medium is neutral. In fact, research has shown that interactive content can be more educational than passive content. And yet we still feel more comfortable with our kids reading books for hours than we do with them playing video games for hours.

What we need to focus on is the content. I live in Minnesota, and this past winter was miserable, so I let my kids have more screen-time than usual. One of the things I noticed was the dramatically different effects of different video games. When they played Flappy Bird, they fell into a hypnotic trance. When they played Wii Sports, they were jumping all over the room, actually winding themselves and breaking a sweat. And when they played Minecraft, they came mentally alive, building incredibly complex virtual structures, collaborating with each other, giving each other tips, and talking all the time.

Of course, kids shouldn't spend all day with screens. Devoting yourself to only one activity is always a problem. If your kid played basketball every waking moment and never learned to read, that would be a problem. If they did nothing but read Shakespeare and never moved a muscle, that would also be cause for concern. Moderation is the obvious solution.

But we need to get beyond worrying about whether “screens” are melting our kids' brains. What we need to be conscious of is encouraging our kids, and ourselves, to engage in activities that enrich us. Sometimes that's interacting with each other, sometimes that's a hike in the forest, sometimes that's a great book, and sometimes that's an incredible video game. It's not the medium that matters, but what we take from it.

Update:

Stephen Hackett wrote a follow-up post that addresses some of the things I'm talking about.

Not Remembering Ourselves

In a piece about memory for National Geographic Magazine last month – “Remember This” – Joshua Foer profiles both a man who cannot form new memories and a woman who can’t stop remembering every single thing that happens to her. In the middle of the piece, Foer takes a detour to discuss technology:

We’ve gradually replaced our internal memory with what psychologists refer to as external memory, a vast superstructure of technological crutches that we’ve invented so that we don’t have to store information in our brains. We’ve gone, you might say, from remembering everything to remembering awfully little. We have photographs to record our experiences, calendars to keep track of our schedules, books (and now the Internet) to store our collective knowledge, and Post-it notes for our scribbles. What have the implications of this outsourcing of memory been for ourselves and for our society? Has something been lost?

I thought of this when I read about and then listened to a podcast discussion of the controversy around Greg Knauss’s iPhone app Romantimatic. Brief recap: Knauss wrote the app to help people remember to contact their “sweetheart” every so often by text or phone. Knauss included a few prefab text messages you could send to your sweetheart, most of which were clearly intended as humorous (e.g. “My phone just vibrated and I thought of you.”) Maybe it was the prefab messages, and maybe it was the current knee-jerk fear of how technology is taking over our lives, but a lot of people freaked out. (I highly recommend Knauss’s meta-analysis of the outrage.)

One of the most vehement critics was Evan Selinger, a Fellow at the Institute for Ethics and Emerging Technology, who wrote a takedown of Romantimatic for The Atlantic (“The Outsourced Lover”) and then a companion piece for Wired that linked Romantimatic to other apps that are “Turning Us into Sociopaths”:

While I am far from a Luddite who fetishizes a life without tech, we need to consider the consequences of this latest batch of apps and tools that remind us to contact significant others, boost our willpower, provide us with moral guidance, and encourage us to be civil. Taken together, we’re observing the emergence of tech that doesn’t just augment our intellect and lives — but is now beginning to automate and outsource our humanity. (emphasis in the original)

Note that word “outsource” again, the same word Foer used to describe how technology is taking over aspects of what we used to remember. The implications of “outsource” are not only pejorative, but carry strange class-based economic associations, as if by letting our phones take on certain tasks, we’re stealing jobs from the working masses and taking advantage of cheap labor overseas.

Of course, technology has always promised to make labor cheaper, or at least easier. From the invention of fire to the wheel, from washing machines to nail guns, the goal of technology has always been to lessen the load of physical labor, to “outsource” it, if you will. We don’t find outsourcing our physical labor to be problematic, though. What makes us uncomfortable is when technology encroaches on the labor of our brains rather than our bodies.

This despite the fact that technology has been supplementing our brains for at least as long as we’ve been making tools. Before the internet, before computers, before printing presses, books, or even writing of any kind, people stored information in their brains. But they did so using a primitive technology: most cultures stored their most important historical and spiritual information in the form of song and verse. Why? Because music and rhyme are easier to remember; they are the original brain augmentation technology.

Then, of course, we invented writing and everything went downhill. Socrates famously spoke out against the invention of writing, saying it would, “create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” And now here we are, having to be reminded by our phones to send a sweet message to our sweethearts. Just as Socrates predicted, we’ve lost our humanity!

Except not. Because what is humanity? Isn’t the fact that we create these tools part of what makes us human? I’m reminded of a passage from the amazing essay “Mind vs. Machine” by Brian Christian (which later became a book) about an annual Turing Test, in which humans and computers compete to see who seems the most human.

Christian delves into the history of computers and points out that the word “computer” actually used to refer to human beings, most of them women, who did the hard work of calculating any sufficiently complex bit of data that needed to be calculated. When the first computer-like devices were invented, they were described as “digital computers”: digital versions of the human workers. Alan Turing himself said, “These machines are intended to carry out any operations which could be done by a human computer.”

Christian writes about the irony of this reversal:

Now it is “digital computer” that is not only the default term, but the literal one. In the mid–20th century, a piece of cutting-edge mathematical gadgetry was said to be “like a computer.” In the 21st century, it is the human math whiz who is “like a computer.” It’s an odd twist: we’re like the thing that used to be like us. We imitate our old imitators, in one of the strange reversals in the long saga of human uniqueness.

This is why I find it odd that apps like Romantimatic are accused of outsourcing our humanity. I’ve been trying out Romantimatic myself in the past couple of weeks. I chose to delete all the prefab text messages (which may have been a design flaw, though I appreciate their sense of humor). Instead, I use the app as an unscheduled, random reminder to think about my wife and tell her what I’m thinking. This does not rob me of my humanity. If anything, it stops me in the middle of my day and reminds me to think about something of greater importance, not unlike a brief meditation or prayer.

Technology is not our savior, ready to deliver some utopian future, but it does not have to be our enemy. It’s been with us since the beginning, from poetry to reminder apps. Far from making us less human, it can even reawaken us to our humanity in the midst of our mechanized, busy lives. We just have to learn how to use it.

Talking About iPads and Real Work

Shawn Blanc and I were apparently on a similar wavelength yesterday, responding to Lukas Mathis's thoughtful piece about Windows 8 and the shortcomings of iPad productivity. I love Blanc's point about how those of us trying to use devices like iPads for "real work" and "real creativity" aren't just nerds. We are nerds, no doubt, but we're also helping shape what those devices are capable of.

The Affordance of Intimacy

The latest iPad commercial struck me as overwrought when it first came out. You know the one, with the mountain climbers, the scuba divers, the documentary filmmaker on the precipice of a waterfall, and a voiceover by Robin Williams from "Dead Poets Society," talking about poetry and what it means to be alive. It's not a terrible commercial. But unlike the holiday iPhone commercial, which showcased how ordinary people, even anti-social teenagers, can do extraordinary things with technology, the iPad commercial seemed to be about how extraordinary people can do extraordinary things with technology, especially if they have extraordinarily protective or specially designed cases for their iPads (and plenty of AppleCare).

But then I listened to John Gruber on his most recent Talk Show podcast. He was talking to Joanna Stern about her piece in the Wall Street Journal, arguing that tablet computers still aren't good for doing "real" work, like working with documents, Microsoft Office, etc. Articles on this subject seem to be a trend.

We've all spent the last 15-20 years using computers to work with documents, usually Microsoft Office documents, so we've come to see that as the primary productive purpose of computers. In a piece about the Surface Pro 2, Lukas Mathis recently detailed all the ways a simple task like writing a job application is easier on a PC than an iPad, how you can have a webpage open as you're writing, grab images and easily embed them into the document, look at a friend's emailed suggestions alongside what you're writing, all the way up to the choice of file formats for the final product:

...you might want to export your letter and CV as PDFs, maybe combine them into a single PDF, or maybe ZIP them. You want to attach the resulting file to an email. It’s reasonably simple on a Mac or PC, but I’m not sure if some of these things are even possible on an iPad.

All of this is true. These are the productivity strengths of the PC: the multi-window, multitasking, multi-file-formatting abilities. But the question isn't whether the iPad is better at any of these activities. The question is whether the iPad is better at any productive activities. And why do we care?

Which brings me back to John Gruber's podcast. Discussing the iPad commercial with Joanna Stern, Gruber made a point that hadn't occurred to me before about what kinds of "work" can be done with a tablet computer.

[That commercial] shows that a lot, if not most, of the things that you could call work or creation that you can do on tablets are things that weren't really good or still aren't good for doing with the laptop. It's new things, right? One of the examples, there's a hockey team and they've got some kind of app and they're using the camera and showing this, and they've got like a play or something, and the guy can draw on the screen...It seems totally natural that the coach is there on the ice with an iPad in his hand, and it would look ridiculous if he was there holding a laptop.

The operative phrase there is "in his hand." When Steve Jobs gave the first public demonstration of the iPad, he famously began the demo by sitting back in a comfortable chair. For some commentators at the time, this signaled the fact that the iPad was a "lean back" rather than a "lean forward" device. Hence, the continuing debate about content consumption over content creation. But it's important to remember the first thing Steve Jobs said as he was sitting down in that chair: "It's so much more intimate than a laptop."

Designers like to talk about affordances, the property of an object that encourages a specific kind of action. Levers afford pulling, knobs afford twisting, buttons afford pushing, and so on. I am not a designer, but I first learned of the concept of affordances in the field of education. Educational psychologists argue that most behavior in a classroom, both good and bad, is the result of affordances. If you wander around the room checking students' homework and you don't give the students anything to do, you shouldn't be surprised if the class descends into chaos. You afforded that behavior.

What makes the iPad stand out from other tablet computers, and what makes it so much more appealing, is that it was designed with intimacy in mind. And I think we're just on the cusp of discovering how that intimacy affords different kinds of behaviors, different kinds of creativity and productivity.

To give just one example from my own life: I left my job as a public radio producer several years ago and took a job teaching writing. My college serves a large population of West African immigrants, many of whom came to this country as refugees, so there are numerous language issues I have to work with in their writing. I determined early on that writing comments on their papers by hand was too difficult. I couldn't fit my chicken scratch words legibly between the lines, and I often ran out of space in the margins.

So I started having them turn in all their papers digitally. That way, I could use Microsoft Word (and eventually Pages) to track changes and insert comments digitally. I even developed keyboard shortcuts so that I could insert certain comments with a single keystroke. This digital system felt more efficient, because I could type faster than I could write, and I didn't have to deal with so much paper.

But there were also certain drawbacks. The process of grading papers felt less humane somehow, like I was merely at the controls of a machine, cranking out widgets. I also didn't love the look of my printed comments: gray boxes with skeletal lines tying them back to the students' original words. My students were often confused about which comments referred to which words.

So recently, I decided to see if I could grade my students' papers entirely with an iPad. I bought Readdle's PDF Expert based on Federico Viticci's review in MacStories, bought myself a decent stylus (since replaced with this one), converted all my students' papers to PDF documents, and got to work.

In his book "The Hand: How Its Use Shapes the Brain, Language, and Human Culture," the neurologist Frank R. Wilson writes,

When personal desire prompts anyone to learn to do something well with the hands, an extremely complicated process is initiated that endows the work with a powerful emotional charge...Indeed, I would go further: I would argue that any theory of human intelligence which ignores the interdependence of hand and brain function, the historic origins of that relationship, or the impact of that history on developmental dynamics in modern humans, is grossly misleading and sterile.

As someone who hasn't enjoyed writing in longhand since I was about ten years old, I was frankly shocked by how different the grading experience felt when I began to annotate my students' words directly on the screen. Somehow, using my hand more directly made all the difference. Not only could I reach out with my pen, circle, and underline, the way I would have on paper, but I could instantly erase and start again, and even zoom in to impossibly small spaces, and then back out again to see the whole document. And if I wanted to use text instead of handwriting, I could just tap in the column and type, or even dictate my words.

When my students got their papers back, they said my comments were much easier to understand, because most of them were written directly beneath the words to which they referred. It seems like a small thing, but the effects matter. Students who had come to this country as refugees were learning how to write better thanks to the tiny words I could scrawl directly on the screen of this device.

The iPad also freed me from my desk. I could still grade at a desk if I wanted, but I could also sit in an easy chair or curl up on the couch. I even spent a whole Sunday morning (one of our recent double-digit subzero days in Minnesota) grading in bed.

Which leads me to the biggest difference: how I felt about the process. I didn't dread grading the way I used to. It felt less like grinding away at a machine and more like a creative act. The iPad still allowed me to capture my students' work digitally, so it wasn't a mess of papers, but it also engendered this renewed intimacy. By taking my fingers off the keyboard, putting the screen in my hands, and creating that slightly more intimate space, the iPad has turned my interaction with my students' words from an act of digital drudgery into an act of communication.

Can the iPad still improve? Become more powerful? More versatile? Better at inter-app communication? Am I jealous of Lukas Mathis's experience with the Surface Pro's stylus? Of course. But the first thing Apple got right, the most important thing, was how it feels. It's such a small distance from typing in front of a screen to holding the screen in your hands, but something new happens when you reduce that distance. I, for one, am excited to see how developers begin to harness the power that intimacy affords.

Mastering Our Tools

Tim Wu, writing for the New Yorker online, argues that technology can make our lives too easy, presenting the danger that "as a species we may become like unchallenged schoolchildren, sullen and perpetually dissatisfied." The piece feels a bit fear-mongering to me. But I love this:

Anecdotally, when people describe what matters to them, second only to human relationships is usually the mastery of some demanding tool. Playing the guitar, fishing, golfing, rock-climbing, sculpting, and painting all demand mastery of stubborn tools that often fail to do what we want. Perhaps the key to these and other demanding technologies is that they constantly require new learning. The brain is stimulated and forced to change.

With this point, Wu actually undermines his entire premise. Part of being human is enjoying the experience of learning, whether that's learning to play guitar, play a video game, write poetry or write code. When I got my first iPod, I became obsessed with smart playlists. When my wife got her first iPhone, she immediately became obsessed with photography apps. When my children recently started playing the video game Minecraft, they quickly began looking up YouTube videos about how to build different kinds of portals so that they could travel to different dimensions and worlds within that imaginary world.

Rather than snuffing out our desire to learn, technology can actually cultivate that desire by continually giving us new tools to manipulate and master. As I wrote in the very first post on this blog:

No other field (thanks to Moore's Law) is accelerating at quite the same pace towards new possibilities of excellence. Software in particular, unbound by the limits of the physical world, is providing tools that allow us to make things that are more perfect, more precise, more useful, more beautiful. In many ways, technology itself is both the means and the ends of striving towards excellence.

The Dangers of Meritocracy (in Kids’ Movies)

After I wrote about the problems with the "Chosen One" theme in recent movies (like The Lego Movie), I heard from Paul Wickelson, an old friend with a PhD in American Studies from the University of Utah, who pointed out that my call for more meritocracy in these kinds of movies has its own problems. I enjoyed his thoughtful response so much that I wanted to post it here.

I haven't seen "The Lego Movie" yet, though I hear it is good. In any case, I like this post because although I've been disturbed by the "chosen one" trope, I hadn't thought of it in gender terms--and I think you're right to call attention to these gendered aspects. I definitely think that the "chosen one" theme fits too easily with a phenomenon (probably more common among boys) that one elementary school teacher friend likes to call the "legend in my own mind" syndrome. In this syndrome, kids comfort themselves with the idea of their own wonderful innate talent or special-ness, but never actually produce anything. As she put it, “I’d rather have a kid who has only four cylinders, but is working on all four cylinders, than a kid with eight cylinders working on two.”

That said, even a justifiable critique of the "chosen one" trope doesn't quite solve a larger problem: the problem inherent in the reign of the meritocracy as such. Even if we had a perfectly "fair" system in which hard work and talent was properly rewarded and tracked in the most minute of ways to ensure that rewards only went to those who "deserve" them, we would still not have a just society. Instead, we would have an ultra-competitive society in which worth is entirely calculated according to the dominant standards of measurement: i.e. money, standardized tests, the formal production of "value" defined according to the dictates of the market as an all-knowing institution, etc. And in fact, appeals to the supposed "fairness" of the meritocracy lie behind a lot of the apologies for class inequality these days. Supposedly, the top one-tenth of one percent deserve their money not because they are "chosen" in some mystical way, but because they have worked hard and produced important contributions to society, etc. (Never mind whether any of this is actually true).

In a true meritocracy, then, the most successful people would be those who were willing to work 80 hours a week, engage in ruthless and even destructive competition, and sacrifice everything toward the formal achievement of "success" in any given area. Such people can then be used as exemplars to browbeat the rest of us into ever more frenzied effort. So instead of a collective effort on behalf of everyone (and an emphasis on equality and reciprocity, over and above even an emphasis on "excellence"), we have yet more frantic striving, inequality, and disdain for those who have not achieved that level of success. In the world of education, we see the debate between “excellence” and “equality” at work in the difference between South Korea and Finland. Both of them achieve high educational outcomes, but in South Korea many kids spend an astronomical number of hours working with tutors after school and staying up late into the night studying for grueling exams that separate the wheat from the chaff. In Finland, they focus on providing an equal education for every student. And although kids in Finland take education seriously, they don’t commit suicide at the level of South Korean kids, because their lives are more balanced.

Given this pervasive background buzz of global competition, might not the "chosen one" tropes work as a defensive fantasy against the multicultural/meritocratic framework that now functions as the reigning ideology of contemporary neoliberal capitalism? For instance: given the increase in worldwide competition in the economic realm, the U.S. can no longer fall back on its God-ordained providential status as the "chosen nation," and its citizens must now compete for jobs with motivated people in India, China, and elsewhere. Ala "Kung Fu Panda," the “chosen one” fantasy suggests that the fat, lazy American nevertheless gets the job simply by being chosen (hard-working Angelina Jolie "tiger lady" notwithstanding). But even if the U.S. deserves to be rudely awakened from its self-serving "chosen nation" delusion (and it certainly does!), does it then follow that ruthless global competition is the new, God-ordained system? Is Tom Friedman's "flat world” the "chosen" system? Do we just need to get out there and compete with the Chinese factory workers who live in dormitories and leap up to work at the sound of an alarm bell at 2am because Apple needs a new order of I-Pads ASAP? Or is there another alternative?

I guess my point is this: even a system that is more "fair" on a gender, race, and nationality basis can still be brutal and serve the interests of the elite/powerful forces. As some people have put it, if you're part of an elite and you want to stay in power, it's actually in your interest to construct a gender-open, gay-friendly, multicultural, multinational elite, because then you will have more legitimacy--thereby making it harder for everyone else to fight against your rule. I’m not against gender/race/nationality fairness, but I do question the way that the standard line of gender/identity politics can actually be used to perpetuate class inequalities.

Either way, I totally agree with your post. I just can't help thinking outside of its immediate context!

I especially enjoyed Paul's response because of my own trepidation about where my argument (about who gets to be considered special, and how characters like Hermione and Wyldstyle really are more special than the main characters of their movies) was leading me. As I said on Twitter to Matt Haughey:

Some may argue that this is going way too deep into the implications of a children's movie, but what has greater cultural impact, and deserves greater critical inquiry, than the stories we tell our children?

The Problem with the "Chosen One"

I should start by saying that I loved "The Lego Movie." I laughed with almost inappropriate volume while watching it, nearly cried at the emotional climax (which I will not spoil here), came out of the theater singing the theme song "Everything Is Awesome," and spent dinner with my wife and kids recounting our favorite parts. Moment by moment, it was probably the most entertaining movie I've seen in years.

And yet, something about it did not sit quite right with me, something having to do with the prophecy and the "chosen one."

The theme of the "chosen one" feels so interwoven with the movies of my youth that it's almost hard to pin down its source. The original, for me, was probably Luke Skywalker in "Star Wars," chosen to become the first Jedi in a generation and to defeat the empire. But there was also Jen, the Gelfling chosen to fulfill the prophecy and repair the titular "Dark Crystal" in Jim Henson's masterpiece. I was introduced to T.H. White's version of the story by Disney's "The Sword in the Stone," about a bumbling young squire named Arthur, chosen to be the new king when he inadvertently pulls a sword out of a rock. You can follow the various permutations of this "chosen one" theme over at tvtropes.org, but it should be obvious to anyone paying attention to popular culture that this theme keeps gaining traction, from "The Matrix" to "Harry Potter" to "Kung Fu Panda" to, most recently, "The Lego Movie."

It's obvious why the theme is so appealing. The hero begins most of these stories as utterly ordinary, or even less than ordinary: a farmer in a podunk town, a cook in a noodle restaurant, an office worker in a cubicle, a half-abused kid living under the stairs. And yet, by the end of the story, this utterly ordinary person will learn to wield extraordinary powers, will in fact be the only one who can save the world. Who among the utterly ordinary masses watching these movies doesn't want to dream that we too could become extraordinary?

It's also obvious why this story resonates so strongly in Western culture. It's essentially the story of Jesus, the apparent (but possibly illegitimate) son of a carpenter, born so poor that his mom gave birth in a pen for farm animals. But it turns out he too is the subject of a prophecy, chosen to become (in the words of John the Baptist) "the lamb of God, who takes away the sins of the world." Jesus Christ superhero.

But the Christian overtones of the "chosen one" trope are not what I find disturbing. What I do find disturbing is that so many of the most prominent "chosen ones" in modern popular culture (with only one major exception I can think of) are boys. Of course, it's an old criticism that too many of the heroes in popular culture are male. It's something Pixar and Disney have been working on lately, but sexism is endemic to Hollywood, etc. This is not news.

What is new, or at least new to me, is the realization that so many of these "chosen one" stories are about boys who go through a transformation from ordinary to extraordinary, from the least significant to the most significant person in the universe, all while accompanied by a sidekick who is already extraordinary to begin with. And what's the gender of that extraordinary sidekick? Why, she's a girl of course.

Take Luke Skywalker. While he's helping out his uncle on the farm, buying and fixing junky droids, what's his sister doing? She's a princess, already locked in battle with Darth Vader, already good with a gun, and even has an inkling of the Force. But is she the one picked to wield a lightsaber to face down her father? No. Instead, Obi-Wan and Yoda take their chances on that awkward kid from the farm who knows nothing.

Then there's Harry Potter. While he's busy slacking on exams, playing sports, and sneaking off for snogging and butterbeer, what's Hermione Granger doing? Just becoming the best student in the history of Hogwarts, knowing the right answer to virtually every question, better at magic than anyone else her age. But is she the one who faces down the bad guy? Of course not. She wasn't "chosen".

The same goes for Neo in "The Matrix." The movie starts with a woman in a skin tight leather suit performing incredible feats of Kung Fu agility and power. She can leap between tall buildings. She actually knows what the Matrix is! Can Neo do any of this? Does he know any of this? No. He has to learn it. But he'll be better than that girl ever was. And he won't even break a sweat in his final fight. Because he's the chosen one.

The troubling aspect of this trope becomes especially clear in "Kung Fu Panda" and "The Lego Movie," partly because each movie pokes fun at the very idea of a chosen one. In "Kung Fu Panda," Tigress (voiced by Angelina Jolie) naturally expects to be picked as the Dragon Warrior because she's the best, hardest-training Kung Fu student in Master Shifu's dojo. But instead, a clumsy, overweight, slacker panda gets the job by accident. "The Lego Movie" enacts the exact same scenario, in which "Wyldstyle" (voiced by Elizabeth Banks) expects to become the chosen one because she's the best "Master Builder" training under her teacher Vitruvius, and she possesses ninja-level improvisatory Lego construction powers. Instead, the job goes to Emmet, the utterly ordinary construction worker, king of mediocrity and conformity.

Tigress and Wyldstyle aren't happy to learn the true identity of the chosen one. In fact, they're pissed, and rightfully so. They've been working their asses off to be exceptional, and these guys saunter in and take the top spot without nearly the same qualifications, experience, or know-how.

Sound familiar? What kind of message is this sending? Stories about chosen ones are really stories about who gets to be, and what it takes to be, exceptional. They're stories about privilege. And I don't just object to the gender imbalance. The problem isn't so much who gets to be chosen but the fact that we're so obsessed with being chosen at all.

When our culture celebrates business leaders like Steve Jobs, Mark Zuckerberg, and Jeff Bezos, or examines politicians like Ronald Reagan, Bill Clinton or Barack Obama, it rarely holds them up as exemplars of hard work. Instead, they're brilliant, innovative, visionary, charismatic. They possess (were "chosen" to receive) great gifts. But when women reach similar levels of achievement, they're usually praised (or ridiculed) for their dedication and pluck. Working hard has somehow become a feminine, and not especially admirable, trait.

There is evidence that women are working harder than men in the United States. They've been outperforming men in a number of categories, especially education, for years now. And yet they still struggle to reach top positions in business and government. These are the real-world Hermiones and Wyldstyles, standing in the shadows of their "chosen" male counterparts.

If we keep telling these stories about what it takes to be successful, stories that are also prophecies about who gets to be successful, who gets to be "chosen," those prophecies will be self-fulfilling. It's time we changed the story. I, for one, want my kids to grow up in a world where the Trinitys, Hermiones, Tigresses, and Wyldstyles are the real heroes, where the prophecy of some old guy in a white beard means nothing in the face of hardworking badassery.

UPDATE: Several people on Twitter have pointed out that The Lego Movie is ironically playing on this trope rather than reinforcing it.

I agree to an extent. The movie's treatment of Emmet as hero is certainly ironic, and reminiscent of my favorite episode of The Simpsons, but the ultimate message still rings slightly false. That message (spoiler alert): Emmet isn't any more "special" than anyone else. The prophecy isn't even real. Anyone can be the most amazing, interesting person in the universe as long as they believe in themselves.

The problem: this also means Emmet isn't any less special than anyone else, namely Wyldstyle. Which he clearly is. (Though I fear that makes me sound like Ayn Rand.)

Fragments Coming Together

Mark O'Connell on how Twitter reveals the ways in which public events can focus our collective minds.

Most days, part of the complex compulsion of Twitter is the fragmentariness of the experience, the way in which, barring some terrible or hilarious or infuriating event, everyone tends to be talking about something different. You scroll through your timeline and you get a witticism, then an entreaty to read an essay or column, then a grandstanding denunciation of some phone company’s subpar customer service, then an announcement of what a specific person’s current jam is, then an accidental insight into some inscrutable private misery. Its multifariousness and thematic disorder is a major element of its appeal. But with the death of someone like Philip Seymour Hoffman or Lou Reed or Seamus Heaney—someone who has left an impression on many, many people—there is a quick and radical convergence of focus.

Nobody Wants to See Your Post ...

You won't believe™ what Brett Terpstra says you shouldn't do on social media. Or maybe you will believe it. Basically, the thing you shouldn't do is tell people what they shouldn't do.

There have been multiple articles lately across the parts of the Internet I frequent regarding what one shouldn’t post on their social media accounts. I would like to respond to every one of them by saying “screw you.” I’m pretty sure there’s no Dear Abby for Facebook, and if there is, it isn’t you.

I couldn't agree more.

Magic and Grandeur

In reference to Bill Nye's recent debate with a creationist, Jason Kottke posted this wonderful quote from physicist Richard Feynman about the idea that an artist can appreciate the beauty of a flower, whereas a scientist ruins the beauty by taking the flower apart.

First of all, the beauty that [the artist] sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is ... I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it's not just beauty at this dimension, at one centimeter; there's also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don't understand how it subtracts.

I had a physics professor in college who told me about a conversation he'd had with other physics professors, in which one of them referred to him as a "real scientist" (he'd recently published some important research) rather than a mere teacher. He got pissed. He said something like, "I don't see my work as a teacher to be less important than my work as a scientist. If 'real scientists' don't believe that a big part of our jobs, of all our jobs, is to educate people about science, to attract people to science, to spread the gospel about the beauty of science, then science will die."

Phil Plait made a similar argument in Slate about the Bill Nye debate.

Roughly half the population of America does believe in some form of creationism or another. Half. Given that creationism is provably wrong, and science has enjoyed huge overwhelming success over the years, something is clearly broken in our country. I suspect that what’s wrong is our messaging. For too long, scientists have thought that facts speak for themselves. They don’t. They need advocates.

I agree. Scientists do themselves a disservice when they let themselves be portrayed, or even portray themselves, as mere enemies of "magical thinking." Scientists are not devoid of wonder. They are not opposed to magic. Quite the opposite. Science is the study of magic. What is matter? What is energy? What are the stars? What is life? Where did it come from? Asking and trying to answer these questions isn't ruining the magic, it's savoring the magic.

As Darwin himself wrote in "On the Origin of Species:"

It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us ... Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Artificial Communication

After watching "Her," the new Spike Jonze movie about a man falling in love with an artificially intelligent operating system, I got in my car, started it up, and then briefly held my thumb down on the home button of my phone. The phone emitted a cheerful, questioning double beep. "Tell my wife," I said, "'I'm on my way home.'" The phone parsed my words into a text message. A woman's voice asked, "Are you ready to send it?" I was.

It's easy to see the movie as an exaggeration of my interaction with Siri, to argue that our current fixation with technology could lead down a slippery slope to imaginary relationships with artificially intelligent beings like Samantha, the Scarlett Johansson-voiced operating system from the movie. Several articles (like this one) have linked the movie to a famous chat bot named ELIZA, created at MIT in the mid-sixties, which used vaguely empathetic questions to create the illusion of a conversation with human users. Joseph Weizenbaum, the creator of the chatbot, later wrote,

I was startled to see how quickly and how very deeply people conversing with [it] became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.

I expect most people to find it sad, or even disturbing, that humans could be so easily duped by technology. Sherry Turkle (whose theories about how technology is driving us apart may not be supported by the evidence) has written of her horror at observing people interacting with robots.

One of the most haunting experiences during my research came when I brought one of these robots, designed in the shape of a baby seal, to an elder-care facility, and an older woman began to talk to it about the loss of her child. The robot seemed to be looking into her eyes. It seemed to be following the conversation. The woman was comforted.

That final sentence is meant to fill you with dread. The usual narrative about technology in Western culture, going back at least as far as Mary Shelley's "Frankenstein," is that technology makes a lot of promises, but those promises, at best, prove empty. And at worst, they will give rise to monsters that viciously murder everyone we care about. I've written about this before.

The problem with this narrative is that it conflates and denigrates forms of technology that have, in fact, very little to do with each other. My smartphone is both addictive and maddening, not because it listens to me or simulates empathy, but because it can be so many things. I could use it to check my email, Twitter, Facebook, my RSS reader, my Instapaper queue, Flipboard, Tumblr, Instagram. I could also add an item to my todo list, write a journal entry, write a blog post, take a picture, listen to a podcast, read a book. And just as the device can be many things, so it reminds me that I can be many things: an employee, a teacher, a spouse, a friend, a family member, a reader, a photographer, a writer. I can feel it pulsing with obligations in my pocket. I sometimes find myself flipping through apps, and potential identities, the way I used to flip through TV channels. All that possibility can be overwhelming.

When Steve Jobs introduced the iPhone, he famously said it was three devices: a wide-screen iPod, a revolutionary phone, and a breakthrough internet communicator. And if you watch the video of that introduction, everyone cheers the idea of a revolutionary phone, not so much an "internet communicator." Of course, as others have pointed out, it was the internet communicator that was the real revolution. And in many ways, it's the phone that's been left behind.

Which is why it's significant that Joaquin Phoenix's character interacts with Samantha, his operating system, through a kind of high-fidelity phone call. So much of what feels clumsy and alien about our experience of computers comes down to how we communicate with them. What if that communication became entirely familiar, as familiar as a real conversation? This "input method" of a phone call also removes the need for a screen. Instead of staring at a device, Joaquin Phoenix spends much of the movie staring at the world. And even more importantly, rather than presenting an endless array of possibilities, Samantha unifies those possibilities into one experience, the experience of her company.

You can argue about whether such an artificially intelligent operating system would turn out well for humanity in real life, and I don't want to give anything away about the movie, but if a human being derived meaning from such a relationship, I don't see how that meaning is any less relevant, any less meaningful, simply because it's a relationship with something "artificial." Humans have always derived meaning from artificial things. As Brian Christian writes in a piece about "Her" for The New Yorker's "Page-Turner" blog, the original technology that messed with our heads was language itself.

As both an author and a lover of literature, I would be a hypocrite to condemn too strongly the power of indirect or one-way intimacy. I run the disembodied thoughts of some other mind through my own, like code, and feel close to someone else, living or dead, while risking nothing, offering nothing. And yet the communion, I would argue, is real. Books themselves are perhaps the first chatbots: long-winded and poor listeners, they nonetheless have the power to make the reader feel known, understood, challenged, spurred to greatness, not alone.

Writing, drama, printing, photography, motion pictures, recorded music, typewriters, word processors, the internet: all have at various times been called enemies of culture, even of humanity. But the fact is that technology is part of our culture, part of our humanity. Of course there's the potential that we could get lost in the rabbit hole of communicating with an artificially intelligent being, but would that be any better or worse than getting lost in Netflix TV show marathons or Minecraft expeditions? Or, for that matter, spending one's life reading the classics of literature?

What I loved about "Her" was how it depicted an imaginary relationship with technology that was neither utopic nor dystopic. It was just problematic. Like any passionate, fiery relationship.

Humanity and Technology

Upon the launch of David Pogue's new Yahoo Tech site, I was initially excited, as I had long been wishing for a different kind of tech journalism. The initial word coming out of the CES announcement was that the new site would try to inject a little more humanity into tech coverage. All to the good, I thought.

But then I looked at the site, and found a series of articles about "What the heck is bitcoin?" "How the internet is blowing your mind!" "How to keep your kids safe on Facebook," and "Why selfies are the end of civilization as we know it!" I'm paraphrasing, but only slightly.

In a paroxysm of disgust, I butted (perhaps rudely) into a Twitter exchange Jason Snell and Stephen Hackett were having about the new site.

@ismh @jsnell Tech journalists need criticism, but Yahoo tech is the disease, not the cure.

— Rob Mcginley Myers (@robmcmyers) January 7, 2014

Jason Snell, a writer I very much admire, did not agree.

@robmcmyers @ismh I'd say that simplifies it far too much. Less coverage of VC investors and more practicality is not a bad concept.

— Jason Snell (@jsnell) January 7, 2014

He's right of course. But the execution of that concept depends entirely upon your definition of "practicality." I agree that the problem with much of technology journalism is that instead of covering technology, it's covering the technology business. This is why there are so many articles about market share and profit share, whether Apple or Google is winning at any given moment, why Blackberry is dying and why Microsoft is fading in relevance.

I find most of that stuff tremendously boring. I'm not a VC funder or an investor, I'm just fascinated by technology, and I want to read thoughtful coverage of it, not coverage of the money it makes or doesn't make. The problem with Yahoo Tech is that it goes too far in the other direction. It's full of articles about quirky apps and products ("Computerized Jacket Visibly Shows Your Excitement Whenever You Eat Chocolate," "This Digital Whale Will Follow Your Mouse Pointer Around"), 5 most important these things, 5 most intriguing those things, 5 steps to accomplishing this other thing.

Maybe "normals" will care about and click on this stuff, but the reason it feels like a "disease" to me is that it spreads the notion that technology is mostly frivolous, there to entertain or distract us briefly before we get back to doing something important.

So it's refreshing to be reading a series of pieces this week that actually inject what I think of as "humanity" into tech journalism. First there was Shawn Blanc's piece on how the iPad has changed his grandfather's relationship to his family.

My Grandpa’s iPad has enabled him to do something that he’s been unable to do for as long as I can remember. The 9.7-inch touch screen has turned my Grandpa into a photographer.

Then there was Federico Viticci's beautiful story of how he bought his first iPod and his first Mac, and how it changed his life.

As the world is wishing a happy 30th birthday to the Mac, I think about my first iPod and I realize just how important Apple's halo effect has been for my generation. Perhaps I was going to buy a Mac anyway eventually because I was too fed up with Windows, but the iPod made me curious, excited, and, more importantly, a loyal and satisfied customer. The Mac made me eager to learn more about Macs apps and the people who were making them, so I decided to write about it and somehow I had a job again and I've met so many great people along the way, every doubt and criticism was worth it.

Finally, there's John Siracusa's piece about the introduction of the Mac, which he calls "the single most important product announcement of my life." I love that the image that he associates most strongly with the computer is the image of the team of humans that built it.

It wasn’t just the product that galvanized me; it was the act of its creation. The Macintosh team, idealized and partially fictionalized as it surely was in my adolescent mind, nevertheless served as my north star, my proof that knowledge and passion could produce great things.

This is the "humanity" we need in tech journalism. How humans strive through technology to make great things, and how humans are affected by those great things that have been made. More of that please.

Artificial Guilt

Great piece in the New York Times Magazine about our Dr. Frankenstein-like quest to play God, subvert sin, and build a better artificial sweetener.

The science on these questions is inconclusive at best. There’s no clear evidence that artificial sweeteners cause cancer or obesity, at least in human beings. But the fear of artificial sweeteners was never quite a function of the scientific evidence — or never of just that. It stems as much from a sense that every pleasure has its consequences: that when we try to hack our taste buds in the lab — to wrench the thrill of sugar from its ill effects — we’re cheating at a game we’ll never win.

Just beware the first paragraph, which may spoil aspects of Breaking Bad for those who have not finished it.

Caught Like Insects in a Web

I’d estimate that the New Yorker has published more than 50,000 cartoons since its first issue in 1925 (I couldn’t find a precise number in a cursory Google search). So it’s surprising to learn that the single most reprinted cartoon of that nearly 90-year history is the one by Peter Steiner from 1993 that says, “On the internet, nobody knows you’re a dog.”

In an interview in 2000, Steiner said the line didn’t feel that profound to him when he wrote it. “I guess, though, when you tap into the zeitgeist you don’t necessarily know you’re doing it.” But the idea quickly caught on as shorthand for the internet’s spirit of anonymity, especially in chatrooms and message boards—a spirit that lives on in sites like Reddit, where “doxing” someone is one of the worst crimes you can commit.

In those early days, the internet felt like an ocean made for skinny-dipping; instead of doffing your clothes, you doffed your identity. You could read about, look at, discuss, and eventually purchase just about anything that interested you, without fear of anyone looking at you funny. This lack of identity could be used for nefarious purposes, of course, and it could lead people down any number of self-destructive paths. But for many, it was liberating to find that, on the web, you could explore your true nature and find fellow travelers without shame.

But as paranoia grows about the NSA reading our emails and Google tapping into our home thermostats, it’s increasingly clear that — rather than providing an identity-free playground — the web can just as easily capture and preserve aspects of our identities we would have preferred to keep hidden. What started as a metaphor to describe the complexly interconnected network has come to suggest a spider’s sticky trap.

I thought of this listening to a recent episode of WTF with Marc Maron. Comedian Artie Lange was telling the story of how he came into his own as a stand-up comedian by exploring, with brutal honesty, the darkness of his personal life. Then he stopped himself for a second to explain that he would never have been able to achieve that level of honesty onstage if he’d worried about his sets appearing on the internet.

Lange: It was before every jerk off had a cellphone taping you. Remember when it was midnight at a club in Cincinnati. It was just you and those people! That was it….Now it’s you and everyone in the fucking world.

Maron: And on Twitter…you can’t do anonymous sets anymore.

Lange: Exactly. An anonymous set is what makes you…The comics are going to get worse man, ’cause they’re gonna check themselves…They’re not gonna wanna see themselves bombing on Instagram or wherever the fuck it is and they’re never gonna take risks.

Where the internet used to encourage risk, now it seems to inhibit it, because it turns out the web can capture anything. What you say in front of friends, or even in front of an audience, can blow away with the wind. On the web, your words can stick around, can be passed around. Celebrities may have been the early victims, but now anyone is fair game. Millions of people are potentially watching you, ready to descend in a feeding frenzy of judgment. In the New Yorker’s Page-Turner blog, Mark O’Connell writes about the phenomenon of Twitter users deleting their tweets, something he has seen happen in real time.

It’s a rare and fleeting sight, this emergency recall of language, and I find it touching, as though the person had reached out to pluck his words from the air before they could set about doing their disastrous work in the world, making their author seem boring or unfunny or ignorant or glib or stupid.

Maybe we should treat the web like a public place, with certain standards of behavior. Maybe those who engage in disorderly conduct, posting creepshots and racist tweets, should be exposed and held to account. Perhaps our expectation of anonymity on the internet never made sense. The problem is that the digital trails we leave on the web can blur the line between speech and thought, between imagination and intent.

It’s that blurred line Robert Kolker explores in his piece for New York magazine about the so-called “Cannibal Cop,” Gilberto Valle, who never kidnapped, raped, murdered, or cannibalized anyone, but who chatted about it obsessively online. And even though there was little evidence that he took any steps to make his fantasies a reality, his online discussions served to convict him of conspiracy to do so. Kolker writes:

The line between criminal thoughts and action is something the courts have pondered for decades…What’s changed in recent years are the tools used to detect intent—namely, a person’s online activity. “We’ve always said you can’t punish for thoughts alone, but now we really know what the thoughts are,” says Audrey Rogers, a law professor at Pace University. [emphasis mine]

I’m reminded of a recent Radiolab episode about Ötzi, the 5,000-year-old Iceman discovered in the Alps in 1991. For more than two decades, archaeologists have pored over the details of his clothing, his possessions, his tattoos, and the arrowhead lodged in his back, evidence he was murdered. From the contents of his stomach, they’ve even determined what he ate for his final meal. I wonder if there will someday be archaeologists who sift through our hard drives, tracing out the many digital trails we’ve left in the web, trying to determine not what we were eating, but what we were thinking. Will their findings be accurate?

To paraphrase John Keats, most lives used to be writ in water. Now they’re writ in code. As much as our digital lives are only partial documents, they often seem more real to strangers simply because they are what has been documented. Maybe the internet doesn’t know you’re a dog, but it doesn’t care. In the eyes of strangers, you are that which the web reveals you to be, because the web is where the evidence is.