Not Remembering Ourselves

In a piece about memory for National Geographic Magazine last month – “Remember This” – Joshua Foer profiles both a man who cannot form new memories and a woman who can’t stop remembering every single thing that happens to her. In the middle of the piece, Foer takes a detour to discuss technology:

We’ve gradually replaced our internal memory with what psychologists refer to as external memory, a vast superstructure of technological crutches that we’ve invented so that we don’t have to store information in our brains. We’ve gone, you might say, from remembering everything to remembering awfully little. We have photographs to record our experiences, calendars to keep track of our schedules, books (and now the Internet) to store our collective knowledge, and Post-it notes for our scribbles. What have the implications of this outsourcing of memory been for ourselves and for our society? Has something been lost?

I thought of this when I read about and then listened to a podcast discussion of the controversy around Greg Knauss’s iPhone app Romantimatic. Brief recap: Knauss wrote the app to help people remember to contact their “sweetheart” every so often by text or phone. Knauss included a few prefab text messages you could send to your sweetheart, most of which were clearly intended to be humorous (e.g. “My phone just vibrated and I thought of you.”). Maybe it was the prefab messages, or maybe it was the current knee-jerk fear of how technology is taking over our lives, but a lot of people freaked out. (I highly recommend Knauss’s meta-analysis of the outrage.)

One of the most vehement critics was Evan Selinger, a Fellow at the Institute for Ethics and Emerging Technologies, who wrote a takedown of Romantimatic for the Atlantic (“The Outsourced Lover”) and then a companion piece for Wired that linked Romantimatic to other apps that are “Turning Us into Sociopaths”:

While I am far from a Luddite who fetishizes a life without tech, we need to consider the consequences of this latest batch of apps and tools that remind us to contact significant others, boost our willpower, provide us with moral guidance, and encourage us to be civil. Taken together, we’re observing the emergence of tech that doesn’t just augment our intellect and lives — but is now beginning to automate and outsource our humanity. (emphasis in the original)

Note that word “outsource” again, the same word Foer used to describe how technology is taking over aspects of what we used to remember. The implications of “outsource” are not only pejorative; the word also carries strange class-based economic associations, as if by letting our phones take on certain tasks, we’re stealing jobs from the working masses and taking advantage of cheap labor overseas.

Of course, technology has always promised to make labor cheaper, or at least easier. From the invention of fire to the wheel, from washing machines to nail guns, the goal of technology has always been to lessen the load of physical labor, to “outsource” it, if you will. We don’t find outsourcing our physical labor to be problematic, though. What makes us uncomfortable is when technology encroaches on the labor of our brains rather than our bodies.

This despite the fact that technology has been supplementing our brains for at least as long as we’ve been making tools. Before the internet, before computers, before printing presses, books, or even writing of any kind, people stored information in their brains. But they did so using a primitive technology: most cultures stored their most important historical and spiritual information in the form of song and verse. Why? Because music and rhyme are easier to remember; they are the original brain augmentation technology.

Then, of course, we invented writing and everything went downhill. Socrates famously spoke out against the invention of writing, saying it would, “create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” And now here we are, having to be reminded by our phones to send a sweet message to our sweethearts. Just as Socrates predicted, we’ve lost our humanity!

Except not. Because what is humanity? Isn’t the fact that we create these tools part of what makes us human? I’m reminded of a passage from the amazing essay “Mind vs. Machine” by Brian Christian (which later became a book) about an annual Turing Test, in which humans and computers compete to figure out who seems the most human.

Christian delves into the history of computers and points out that the word “computer” actually used to refer to human beings, most of them women, who did the hard work of performing whatever complex calculations needed to be done. When the first computer-like devices were invented, they were described as “digital computers”: digital versions of the human workers. Alan Turing himself said, “These machines are intended to carry out any operations which could be done by a human computer.”

Christian writes about the irony of this reversal:

Now it is “digital computer” that is not only the default term, but the literal one. In the mid–20th century, a piece of cutting-edge mathematical gadgetry was said to be “like a computer.” In the 21st century, it is the human math whiz who is “like a computer.” It’s an odd twist: we’re like the thing that used to be like us. We imitate our old imitators, in one of the strange reversals in the long saga of human uniqueness.

This is why I find it odd that apps like Romantimatic are accused of outsourcing our humanity. I’ve been trying out Romantimatic myself over the past couple of weeks. I chose to delete all the prefab text messages (which may have been the app’s design flaw, though I appreciate their sense of humor). Instead, I use the app as an unscheduled, random reminder to think about my wife and tell her what I’m thinking. This does not rob me of my humanity. If anything, it stops me in the middle of my day and reminds me to think about something of greater importance, not unlike a brief meditation or prayer.

Technology is not our savior, ready to deliver some utopian future, but it does not have to be our enemy. It’s been with us since the beginning, from poetry to reminder apps. Far from making us less human, it can even reawaken us to our humanity in the midst of our mechanized, busy lives. We just have to learn how to use it.

Talking About iPads and Real Work

Shawn Blanc and I were apparently on a similar wavelength yesterday, responding to Lukas Mathis's thoughtful piece about Windows 8 and the shortcomings of iPad productivity. I love Blanc's point about how those of us trying to use devices like iPads for "real work" and "real creativity" aren't just nerds. We are nerds, no doubt, but we're also helping shape what those devices are capable of.

The Affordance of Intimacy

The latest iPad commercial struck me as overwrought when it first came out. You know the one, with the mountain climbers, the scuba divers, the documentary filmmaker on the precipice of a waterfall, and a voiceover by Robin Williams from "Dead Poets Society," talking about poetry and what it means to be alive. It's not a terrible commercial. But unlike the holiday iPhone commercial, which showcased how ordinary people, even anti-social teenagers, can do extraordinary things with technology, the iPad commercial seemed to be about how extraordinary people can do extraordinary things with technology, especially if they have extraordinarily protective or specially designed cases for their iPads (and plenty of AppleCare).

But then I listened to John Gruber on his most recent Talk Show podcast. He was talking to Joanna Stern about her piece in the Wall Street Journal, arguing that tablet computers still aren't good for doing "real" work, like working with documents, Microsoft Office, etc. Articles on this subject seem to be a trend.

We've all spent the last 15-20 years using computers to work with documents, usually Microsoft Office documents, so we've come to see that as the primary productive purpose of computers. In a piece about the Surface Pro 2, Lukas Mathis recently detailed all the ways a simple task like writing a job application is easier on a PC than an iPad, how you can have a webpage open as you're writing, grab images and easily embed them into the document, look at a friend's emailed suggestions alongside what you're writing, all the way up to the choice of file formats for the final product:

...you might want to export your letter and CV as PDFs, maybe combine them into a single PDF, or maybe ZIP them. You want to attach the resulting file to an email. It’s reasonably simple on a Mac or PC, but I’m not sure if some of these things are even possible on an iPad.

All of this is true. These are the productivity strengths of the PC: the multi-window, multitasking, multi-file-formatting abilities. But the question isn't whether the iPad is better at any of these activities. The question is whether the iPad is better at any productive activities. And why do we care?

Which brings me back to John Gruber's podcast. Discussing the iPad commercial with Joanna Stern, Gruber made a point that hadn't occurred to me before about what kinds of "work" can be done with a tablet computer.

[That commercial] shows that a lot, if not most, of the things that you could call work or creation that you can do on tablets are things that weren't really good or still aren't good for doing with the laptop. It's new things, right? One of the examples, there's a hockey team and they've got some kind of app and they're using the camera and showing this, and they've got like a play or something, and the guy can draw on the screen...It seems totally natural that the coach is there on the ice with an iPad in his hand, and it would look ridiculous if he was there holding a laptop.

The operative phrase there is "in his hand." When Steve Jobs gave the first public demonstration of the iPad, he famously began the demo by sitting back in a comfortable chair. For some commentators at the time, this signaled the fact that the iPad was a "lean back" rather than a "lean forward" device. Hence, the continuing debate about content consumption over content creation. But it's important to remember the first thing Steve Jobs said as he was sitting down in that chair: "It's so much more intimate than a laptop."

Designers like to talk about affordances, the property of an object that encourages a specific kind of action. Levers afford pulling, knobs afford twisting, buttons afford pushing, and so on. I am not a designer, but I first learned of the concept of affordances in the field of education. Educational psychologists argue that most behavior in a classroom, both good and bad, is the result of affordances. If you wander around the room checking students' homework and you don't give the students anything to do, you shouldn't be surprised if the class descends into chaos. You afforded that behavior.

What makes the iPad stand out from other tablet computers, and what makes it so much more appealing, is that it was designed with intimacy in mind. And I think we're just on the cusp of discovering how that intimacy affords different kinds of behaviors, different kinds of creativity and productivity.

To give just one example from my own life: I left my job as a public radio producer several years ago and took a job teaching writing. My college serves a large population of West African immigrants, many of whom came to this country as refugees, so there are numerous language issues I have to work with in their writing. I determined early on that writing comments on their papers by hand was too difficult. I couldn't fit my chicken scratch words legibly between the lines, and I often ran out of space in the margins.

So I started having them turn in all their papers digitally. That way, I could use Microsoft Word (and eventually Pages) to track changes and insert comments digitally. I even developed keyboard shortcuts so that I could insert certain comments with a single keystroke. This digital system felt more efficient, because I could type faster than I could write, and I didn't have to deal with so much paper.

But there were also certain drawbacks. The process of grading papers felt less humane somehow, like I was merely at the controls of a machine, cranking out widgets. I also didn't love the look of my printed comments: gray boxes with skeletal lines tying them back to the students' original words. My students were often confused about which comments referred to which words.

So recently, I decided to see if I could grade my students' papers entirely with an iPad. I bought Readdle's PDF Expert based on Federico Viticci's review on MacStories, bought myself a decent stylus (since replaced with this one), converted all my students' papers to PDF documents, and got to work.

In his book "The Hand: How Its Use Shapes the Brain, Language, and Human Culture," the neurologist Frank R. Wilson writes,

When personal desire prompts anyone to learn to do something well with the hands, an extremely complicated process is initiated that endows the work with a powerful emotional charge...Indeed, I would go further: I would argue that any theory of human intelligence which ignores the interdependence of hand and brain function, the historic origins of that relationship, or the impact of that history on developmental dynamics in modern humans, is grossly misleading and sterile.

As someone who hasn't enjoyed writing in longhand since I was about ten years old, I was frankly shocked by how different the grading experience felt when I began to annotate my students' words directly on the screen. Somehow, using my hand more directly made all the difference. Not only could I reach out with my pen, circle, and underline, the way I would have on paper, but I could instantly erase and start again, and even zoom in to impossibly small spaces, and then back out again to see the whole document. And if I wanted to use text instead of handwriting, I could just tap in the column and type, or even dictate my words.

When my students got their papers back, they said my comments were much easier to understand, because most of them were written directly beneath the words to which they referred. It seems like a small thing, but the effects matter. Students who had come to this country as refugees were learning how to write better thanks to the tiny words I could scrawl directly on the screen of this device.

The iPad also freed me from my desk. I could still grade at a desk if I wanted, but I could also sit in an easy chair or curl up on the couch. I even spent a whole Sunday morning (one of our recent double-digit subzero days in Minnesota) grading in bed.

Which leads me to the biggest difference: how I felt about the process. I didn't dread grading the way I used to. It felt less like grinding away at a machine and more like a creative act. The iPad still allowed me to capture my students' work digitally, so there was no mess of papers, while also engendering this renewed intimacy. By taking my fingers off the keyboard, putting the screen in my hands, and creating that slightly more intimate space, the iPad has turned my interaction with my students' words from an act of digital drudgery into an act of communication.

Can the iPad still improve? Become more powerful? More versatile? Better at inter-app communication? Am I jealous of Lukas Mathis's experience with the Surface Pro's stylus? Of course. But the first thing Apple got right, the most important thing, was how it feels. It's such a small distance from typing in front of a screen to holding the screen in your hands, but something new happens when you reduce that distance. I, for one, am excited to see how developers begin to harness the power that intimacy affords.

Mastering Our Tools

Tim Wu, writing for the New Yorker online, argues that technology can make our lives too easy, presenting the danger that "as a species we may become like unchallenged schoolchildren, sullen and perpetually dissatisfied." The piece feels a bit fear-mongering to me. But I love this:

Anecdotally, when people describe what matters to them, second only to human relationships is usually the mastery of some demanding tool. Playing the guitar, fishing, golfing, rock-climbing, sculpting, and painting all demand mastery of stubborn tools that often fail to do what we want. Perhaps the key to these and other demanding technologies is that they constantly require new learning. The brain is stimulated and forced to change.

With this point, Wu actually undermines his entire premise. Part of being human is enjoying the experience of learning, whether that's learning to play guitar, play a video game, write poetry or write code. When I got my first iPod, I became obsessed with smart playlists. When my wife got her first iPhone, she immediately became obsessed with photography apps. When my children recently started playing the video game Minecraft, they quickly began looking up YouTube videos about how to build different kinds of portals so that they could travel to different dimensions and worlds within that imaginary world.

Rather than snuffing out our desire to learn, technology can actually cultivate that desire by continually giving us new tools to manipulate and master. As I wrote in the very first post on this blog:

No other field (thanks to Moore's Law) is accelerating at quite the same pace towards new possibilities of excellence. Software in particular, unbound by the limits of the physical world, is providing tools that allow us to make things that are more perfect, more precise, more useful, more beautiful. In many ways, technology itself is both the means and the ends of striving towards excellence.

The Dangers of Meritocracy (in Kids’ Movies)

After I wrote about the problems with the "Chosen One" theme in recent movies (like The Lego Movie), I heard from Paul Wickelson, an old friend with a PhD in American Studies from the University of Utah, who pointed out that my call for more meritocracy in these kinds of movies has its own problems. I enjoyed his thoughtful response so much that I wanted to post it here.

I haven't seen "The Lego Movie" yet, though I hear it is good. In any case, I like this post because although I've been disturbed by the "chosen one" trope, I hadn't thought of it in gender terms--and I think you're right to call attention to these gendered aspects. I definitely think that the "chosen one" theme fits too easily with a phenomenon (probably more common among boys) that one elementary school teacher friend likes to call the "legend in my own mind" syndrome. In this syndrome, kids comfort themselves with the idea of their own wonderful innate talent or special-ness, but never actually produce anything. As she put it, “I’d rather have a kid who has only four cylinders, but is working on all four cylinders, than a kid with eight cylinders working on two.”

That said, even a justifiable critique of the "chosen one" trope doesn't quite solve a larger problem: the problem inherent in the reign of the meritocracy as such. Even if we had a perfectly "fair" system in which hard work and talent was properly rewarded and tracked in the most minute of ways to ensure that rewards only went to those who "deserve" them, we would still not have a just society. Instead, we would have an ultra-competitive society in which worth is entirely calculated according to the dominant standards of measurement: i.e. money, standardized tests, the formal production of "value" defined according to the dictates of the market as an all-knowing institution, etc. And in fact, appeals to the supposed "fairness" of the meritocracy lie behind a lot of the apologies for class inequality these days. Supposedly, the top one-tenth of one percent deserve their money not because they are "chosen" in some mystical way, but because they have worked hard and produced important contributions to society, etc. (Never mind whether any of this is actually true).

In a true meritocracy, then, the most successful people would be those who were willing to work 80 hours a week, engage in ruthless and even destructive competition, and sacrifice everything toward the formal achievement of "success" in any given area. Such people can then be used as exemplars to browbeat the rest of us into ever more frenzied effort. So instead of a collective effort on behalf of everyone (and an emphasis on equality and reciprocity, over and above even an emphasis on "excellence"), we have yet more frantic striving, inequality, and disdain for those who have not achieved that level of success. In the world of education, we see the debate between “excellence” and “equality” at work in the difference between South Korea and Finland. Both of them achieve high educational outcomes, but in South Korea many kids spend an astronomical number of hours working with tutors after school and staying up late into the night studying for grueling exams that separate the wheat from the chaff. In Finland, they focus on providing an equal education for every student. And although kids in Finland take education seriously, they don’t commit suicide at the level of South Korean kids, because their lives are more balanced.

Given this pervasive background buzz of global competition, might not the "chosen one" tropes work as a defensive fantasy against the multicultural/meritocratic framework that now functions as the reigning ideology of contemporary neoliberal capitalism? For instance: given the increase in worldwide competition in the economic realm, the U.S. can no longer fall back on its God-ordained providential status as the "chosen nation," and its citizens must now compete for jobs with motivated people in India, China, and elsewhere. Ala "Kung Fu Panda," the “chosen one” fantasy suggests that the fat, lazy American nevertheless gets the job simply by being chosen (hard-working Angelina Jolie "tiger lady" notwithstanding). But even if the U.S. deserves to be rudely awakened from its self-serving "chosen nation" delusion (and it certainly does!), does it then follow that ruthless global competition is the new, God-ordained system? Is Tom Friedman's "flat world” the "chosen" system? Do we just need to get out there and compete with the Chinese factory workers who live in dormitories and leap up to work at the sound of an alarm bell at 2am because Apple needs a new order of I-Pads ASAP? Or is there another alternative?

I guess my point is this: even a system that is more "fair" on a gender, race, and nationality basis can still be brutal and serve the interests of the elite/powerful forces. As some people have put it, if you're part of an elite and you want to stay in power, it's actually in your interest to construct a gender-open, gay-friendly, multicultural, multinational elite, because then you will have more legitimacy--thereby making it harder for everyone else to fight against your rule. I’m not against gender/race/nationality fairness, but I do question the way that the standard line of gender/identity politics can actually be used to perpetuate class inequalities.

Either way, I totally agree with your post. I just can't help thinking outside of its immediate context!

I especially enjoyed Paul's response because of my own trepidation about where my argument (about who gets to be considered special, and how characters like Hermione and Wyldstyle really are more special than the main characters of their movies) was leading me. As I said on Twitter to Matt Haughey:

Some may argue that this is going way too deep into the implications of a children's movie, but what has greater cultural impact, and deserves greater critical inquiry, than the stories we tell our children?

The Problem with the "Chosen One"

I should start by saying that I loved "The Lego Movie." I laughed with almost inappropriate volume while watching it, nearly cried at the emotional climax (which I will not spoil here), came out of the theater singing the theme song "Everything Is Awesome," and spent dinner with my wife and kids recounting our favorite parts. Moment by moment, it was probably the most entertaining movie I've seen in years.

And yet, something about it did not sit quite right with me, something having to do with the prophecy and the "chosen one".

The theme of the "chosen one" feels so interwoven with the movies of my youth that it's almost hard to pin down its source. The original, for me, was probably Luke Skywalker in "Star Wars," chosen to become the first Jedi in a generation and to defeat the empire. But there was also Jen, the Gelfling chosen to fulfill the prophesy and repair the titular "Dark Crystal" in Jim Henson's masterpiece. I was introduced to T.H. White's version of the story by Disney's "The Sword and the Stone," about a bumbling young squire named Arthur, chosen to be the new king when he inadvertently pulls a sword out of a rock. You can follow the various permutations of this "chosen one" theme over at tvtropes.org, but it should be obvious to anyone paying attention to popular culture that this theme keeps gaining traction, from "The Matrix" to "Harry Potter" to "Kung Fu Panda" to, most recently, "the Lego Movie."

It's obvious why the theme is so appealing. The hero begins most of these stories as utterly ordinary, or even less than ordinary: a farmer in a podunk town, a cook in a noodle restaurant, an office worker in a cubicle, a half-abused kid living under the stairs. And yet, by the end of the story, this utterly ordinary person will learn to wield extraordinary powers, will in fact be the only one who can save the world. Who among the utterly ordinary masses watching these movies doesn't want to dream that we too could become extraordinary?

It's also obvious why this story resonates so strongly in Western culture. It's essentially the story of Jesus, the apparent (but possibly illegitimate) son of a carpenter, born so poor that his mom gave birth in a pen for farm animals. But it turns out he too is the subject of a prophecy, chosen to become (in the words of John the Baptist) "the lamb of God, who takes away the sins of the world." Jesus Christ superhero.

But the Christian overtones of the "chosen one" trope are not what I find disturbing. What I do find disturbing is that so many of the most prominent "chosen ones" in modern popular culture (with only one major exception I can think of) are boys. Of course, it's an old criticism that too many of the heroes in popular culture are male. It's something Pixar and Disney have been working on lately, but sexism is endemic to Hollywood, etc. This is not news.

What is new, or at least new to me, is the realization that so many of these "chosen one" stories are about boys who go through a transformation from ordinary to extraordinary, from the least significant to the most significant person in the universe, all while accompanied by a sidekick who is already extraordinary to begin with. And what's the gender of that extraordinary sidekick? Why, she's a girl of course.

Take Luke Skywalker. While he's helping out his uncle on the farm, buying and fixing junky droids, what's his sister doing? She's a princess, already locked in battle with Darth Vader, already good with a gun, and even has an inkling of the Force. But is she the one picked to wield a lightsaber to face down her father? No. Instead, Obi-Wan and Yoda take their chances on that awkward kid from the farm who knows nothing.

Then there's Harry Potter. While he's busy slacking on exams, playing sports, and sneaking off for snogging and butterbeer, what's Hermione Granger doing? Just becoming the best student in the history of Hogwarts, knowing the right answer to virtually every question, better at magic than anyone else her age. But is she the one who faces down the bad guy? Of course not. She wasn't "chosen".

The same goes for Neo in "The Matrix." The movie starts with a woman in a skin tight leather suit performing incredible feats of Kung Fu agility and power. She can leap between tall buildings. She actually knows what the Matrix is! Can Neo do any of this? Does he know any of this? No. He has to learn it. But he'll be better than that girl ever was. And he won't even break a sweat in his final fight. Because he's the chosen one.

The troubling aspect of this trope becomes especially clear in "Kung Fu Panda" and "The Lego Movie," partly because each movie pokes fun at the very idea of a chosen one. In "Kung Fu Panda," Tigress (voiced by Angelina Jolie) naturally expects to be picked as the Dragon Warrior because she's the best, hardest-training Kung Fu student in Master Shifu's dojo. But instead, a clumsy, overweight, slacker panda gets the job by accident. "The Lego Movie" enacts the exact same scenario, in which "Wyldstyle" (voiced by Elizabeth Banks) expects to become the chosen one because she's the best "Master Builder" training under her teacher Vitruvius, and she possesses ninja-level improvisatory Lego construction powers. Instead, the job goes to Emmet, the utterly ordinary construction worker, king of mediocrity and conformity.

Tigress and Wyldstyle aren't happy to learn the true identity of the chosen one. In fact, they're pissed, and rightfully so. They've been working their asses off to be exceptional, and these guys saunter in and take the top spot without nearly the same qualifications, experience, or know-how.

Sound familiar? What kind of message is this sending? Stories about chosen ones are really stories about who gets to be, and what it takes to be, exceptional. They're stories about privilege. And I don't just object to the gender imbalance. The problem isn't so much who gets to be chosen but the fact that we're so obsessed with being chosen at all.

When our culture celebrates business leaders like Steve Jobs, Mark Zuckerberg, and Jeff Bezos, or examines politicians like Ronald Reagan, Bill Clinton, or Barack Obama, it rarely holds them up as exemplars of hard work. Instead, they're brilliant, innovative, visionary, charismatic. They possess (were "chosen" to receive) great gifts. But when women reach similar levels of achievement, they're usually praised (or ridiculed) for their dedication and pluck. Working hard has somehow become a feminine, and not-especially-admirable, trait.

There is evidence that women are working harder than men in the United States. They've been outperforming men in a number of categories, especially education, for years now. And yet they still struggle to reach top positions in business and government. These are the real-world Hermiones and Wyldstyles, standing in the shadows of their "chosen" male counterparts.

If we keep telling these stories about what it takes to be successful, stories that are also prophecies about who gets to be successful, who gets to be "chosen," those prophecies will be self-fulfilling. It's time we changed the story. I, for one, want my kids to grow up in a world where the Trinitys, Hermiones, Tigresses, and Wyldstyles are the real heroes, where the prophecy of some old guy in a white beard means nothing in the face of hardworking badassery.

UPDATE: Several people on Twitter have pointed out that The Lego Movie is ironically playing on this trope rather than reinforcing it.

I agree to an extent. The movie's treatment of Emmet as hero is certainly ironic, and reminiscent of my favorite episode of The Simpsons, but the ultimate message still rings slightly false. That message (spoiler alert): Emmet isn't any more "special" than anyone else. The prophecy isn't even real. Anyone can be the most amazing, interesting person in the universe as long as they believe in themselves.

The problem: this also means Emmet isn't any less special than anyone else, namely Wyldstyle. Which he clearly is. (Though I fear that makes me sound like Ayn Rand.)

Fragments Coming Together

Mark O'Connell on how Twitter reveals the ways in which public events can focus our collective minds.

Most days, part of the complex compulsion of Twitter is the fragmentariness of the experience, the way in which, barring some terrible or hilarious or infuriating event, everyone tends to be talking about something different. You scroll through your timeline and you get a witticism, then an entreaty to read an essay or column, then a grandstanding denunciation of some phone company’s subpar customer service, then an announcement of what a specific person’s current jam is, then an accidental insight into some inscrutable private misery. Its multifariousness and thematic disorder is a major element of its appeal. But with the death of someone like Philip Seymour Hoffman or Lou Reed or Seamus Heaney—someone who has left an impression on many, many people—there is a quick and radical convergence of focus.

Nobody Wants to See Your Post ...

You won't believe™ what Brett Terpstra says you shouldn't do on social media. Or maybe you will believe it. Basically, the thing you shouldn't do is tell people what they shouldn't do.

There have been multiple articles lately across the parts of the Internet I frequent regarding what one shouldn’t post on their social media accounts. I would like to respond to every one of them by saying “screw you.” I’m pretty sure there’s no Dear Abby for Facebook, and if there is, it isn’t you.

I couldn't agree more.

Magic and Grandeur

In reference to Bill Nye's recent debate with a creationist, Jason Kottke posted this wonderful quote from physicist Richard Feynman about the idea that an artist can appreciate the beauty of a flower, whereas a scientist ruins the beauty by taking the flower apart.

First of all, the beauty that [the artist] sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is ... I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it's not just beauty at this dimension, at one centimeter; there's also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don't understand how it subtracts.

I had a physics professor in college who told me about a conversation he was having with other physics professors, and one of them referred to him as a "real scientist" (he'd recently published some important research) rather than a mere teacher. He got pissed. He said something like, "I don't see my work as a teacher to be less important than my work as a scientist. If 'real scientists' don't believe that a big part of our jobs, of all our jobs, is to educate people about science, to attract people to science, to spread the gospel about the beauty of science, then science will die."

Phil Plait made a similar argument in Slate about the Bill Nye debate.

Roughly half the population of America does believe in some form of creationism or another. Half. Given that creationism is provably wrong, and science has enjoyed huge overwhelming success over the years, something is clearly broken in our country. I suspect that what’s wrong is our messaging. For too long, scientists have thought that facts speak for themselves. They don’t. They need advocates.

I agree. Scientists do themselves a disservice when they let themselves be portrayed, or even portray themselves, as mere enemies of "magical thinking." Scientists are not devoid of wonder. They are not opposed to magic. Quite the opposite. Science is the study of magic. What is matter? What is energy? What are the stars? What is life? Where did it come from? Asking and trying to answer these questions isn't ruining the magic, it's savoring the magic.

As Darwin himself wrote in "On the Origin of Species:"

It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us ... Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Artificial Communication

After watching "Her," the new Spike Jonze movie about a man falling in love with an artificially intelligent operating system, I got in my car, started it up, and then briefly held my thumb down on the home button of my phone. The phone emitted a cheerful, questioning double beep. "Tell my wife," I said, "'I'm on my way home.'" The phone parsed my words into a text message. A woman's voice asked, "Are you ready to send it?" I was.

It's easy to see the movie as an exaggeration of my interaction with Siri, to argue that our current fixation with technology could lead down a slippery slope to imaginary relationships with artificially intelligent beings like Samantha, the Scarlett Johansson-voiced operating system from the movie. Several articles (like this one) have linked the movie to a famous chatbot named ELIZA, created at MIT in the mid-sixties, which used vaguely empathetic questions to create the illusion of a conversation with human users. Joseph Weizenbaum, the creator of the chatbot, later wrote,

I was startled to see how quickly and how very deeply people conversing with [it] became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.

I expect most people to find it sad, or even disturbing, that humans could be so easily duped by technology. Sherry Turkle (whose theories about how technology is driving us apart may not be supported by the evidence) has written of her horror at observing people interacting with robots.

One of the most haunting experiences during my research came when I brought one of these robots, designed in the shape of a baby seal, to an elder-care facility, and an older woman began to talk to it about the loss of her child. The robot seemed to be looking into her eyes. It seemed to be following the conversation. The woman was comforted.

That final sentence is meant to fill you with dread. The usual narrative about technology in Western culture, going back at least as far as Mary Shelley's "Frankenstein," is that technology makes a lot of promises, but those promises, at best, prove empty. And at worst, they will give rise to monsters that viciously murder everyone we care about. I've written about this before.

The problem with this narrative is that it conflates and denigrates forms of technology that have, in fact, very little to do with each other. My smartphone is addictive (and maddening) not because it listens to me or simulates empathy, but because it can be so many things. I could use it to check my email, Twitter, Facebook, my RSS reader, my Instapaper queue, Flipboard, Tumblr, Instagram. I could also add an item to my todo list, write a journal entry, write a blog post, take a picture, listen to a podcast, read a book. And just as the device can be many things, so it reminds me that I can be many things: an employee, a teacher, a spouse, a friend, a family member, a reader, a photographer, a writer. I can feel it pulsing with obligations in my pocket. I sometimes find myself flipping through apps, and potential identities, the way I used to flip through TV channels. All that possibility can be overwhelming.

When Steve Jobs introduced the iPhone, he famously said it was three devices: a wide-screen iPod, a revolutionary phone, and a breakthrough internet communicator. And if you watch the video of that introduction, everyone cheers the idea of a revolutionary phone, not so much an "internet communicator." Of course, as others have pointed out, it was the internet communicator that was the real revolution. And in many ways, it's the phone that's been left behind.

Which is why it's significant that Joaquin Phoenix's character interacts with Samantha, his operating system, through a kind of high fidelity phone call. So much of what feels clumsy and alien about our experience of computers is the way we have to communicate with them. What if that communication became entirely familiar, as familiar as a real conversation? This "input method" of a phone call also removes the need for a screen. Instead of staring at a device, Joaquin Phoenix spends much of the movie staring at the world. And even more importantly, rather than presenting an endless array of possibilities, Samantha unifies those possibilities into one experience, the experience of her company.

You can argue about whether such an artificially intelligent operating system would turn out well for humanity in real life, and I don't want to give anything away about the movie, but if a human being derived meaning from such a relationship, I don't see how that meaning is any less relevant, any less meaningful, simply because it's a relationship with something "artificial." Humans have always derived meaning from artificial things. As Brian Christian writes in a piece about "Her" for The New Yorker's "Page-Turner" blog, the original technology that messed with our heads was language itself.

As both an author and a lover of literature, I would be a hypocrite to condemn too strongly the power of indirect or one-way intimacy. I run the disembodied thoughts of some other mind through my own, like code, and feel close to someone else, living or dead, while risking nothing, offering nothing. And yet the communion, I would argue, is real. Books themselves are perhaps the first chatbots: long-winded and poor listeners, they nonetheless have the power to make the reader feel known, understood, challenged, spurred to greatness, not alone.

Writing, drama, printing, photography, motion pictures, recorded music, typewriters, word processors, the internet: all have at various times been called enemies of culture, even of humanity. But the fact is that technology is part of our culture, part of our humanity. Of course there's the potential that we could get lost in the rabbit hole of communicating with an artificially intelligent being, but would that be any better or worse than getting lost in Netflix TV show marathons or Minecraft expeditions? Or, for that matter, spending one's life reading the classics of literature?

What I loved about "Her" was how it depicted an imaginary relationship with technology that was neither utopic nor dystopic. It was just problematic. Like any passionate, fiery relationship.

Humanity and Technology

Upon the launch of David Pogue's new Yahoo Tech site, I was initially excited, as I had long been wishing for a different kind of tech journalism. The initial word coming out of the CES announcement was that the new site would try to inject a little more humanity into tech coverage. All to the good, I thought.

But then I looked at the site, and found a series of articles about "What the heck is bitcoin?" "How the internet is blowing your mind!" "How to keep your kids safe on Facebook," and "Why selfies are the end of civilization as we know it!" I'm paraphrasing, but only slightly.

In a paroxysm of disgust, I butted (perhaps rudely) into a Twitter exchange Jason Snell and Stephen Hackett were having about the new site.

@ismh @jsnell Tech journalists need criticism, but Yahoo tech is the disease, not the cure.

— Rob Mcginley Myers (@robmcmyers) January 7, 2014

Jason Snell, a writer I very much admire, did not agree.

@robmcmyers @ismh I'd say that simplifies it far too much. Less coverage of VC investors and more practicality is not a bad concept.

— Jason Snell (@jsnell) January 7, 2014

He's right of course. But the execution of that concept depends entirely upon your definition of "practicality." I agree that the problem with much of technology journalism is that instead of covering technology, it's covering the technology business. This is why there are so many articles about market share and profit share, whether Apple or Google is winning at any given moment, why Blackberry is dying and why Microsoft is fading in relevance.

I find most of that stuff tremendously boring. I'm not a VC funder or an investor, I'm just fascinated by technology, and I want to read thoughtful coverage of it, not coverage of the money it makes or doesn't make. The problem with Yahoo Tech is that it goes too far in the other direction. It's full of articles about quirky apps and products ("Computerized Jacket Visibly Shows Your Excitement Whenever You Eat Chocolate," "This Digital Whale Will Follow Your Mouse Pointer Around"), 5 most important these things, 5 most intriguing those things, 5 steps to accomplishing this other thing.

Maybe "normals" will care about and click on this stuff, but the reason it feels like a "disease" to me is that it spreads the notion that technology is mostly frivolous, there to entertain or distract us briefly before we get back to doing something important.

So it's refreshing to be reading a series of pieces this week that actually inject what I think of as "humanity" into tech journalism. First there was Shawn Blanc's piece on how the iPad has changed his grandfather's relationship to his family.

My Grandpa’s iPad has enabled him to do something that he’s been unable to do for as long as I can remember. The 9.7-inch touch screen has turned my Grandpa into a photographer.

Then there was Federico Viticci's beautiful story of how he bought his first iPod and his first Mac, and how it changed his life.

As the world is wishing a happy 30th birthday to the Mac, I think about my first iPod and I realize just how important Apple's halo effect has been for my generation. Perhaps I was going to buy a Mac anyway eventually because I was too fed up with Windows, but the iPod made me curious, excited, and, more importantly, a loyal and satisfied customer. The Mac made me eager to learn more about Mac apps and the people who were making them, so I decided to write about it and somehow I had a job again and I've met so many great people along the way, every doubt and criticism was worth it.

Finally, there's John Siracusa's piece about the introduction of the Mac, which he calls "the single most important product announcement of my life." I love that the image that he associates most strongly with the computer is the image of the team of humans that built it.

It wasn’t just the product that galvanized me; it was the act of its creation. The Macintosh team, idealized and partially fictionalized as it surely was in my adolescent mind, nevertheless served as my north star, my proof that knowledge and passion could produce great things.

This is the "humanity" we need in tech journalism. How humans strive through technology to make great things, and how humans are affected by those great things that have been made. More of that please.

Artificial Guilt

Great piece in the New York Times Magazine about our Dr. Frankenstein-like quest to play God, subvert sin, and build a better artificial sweetener.

The science on these questions is inconclusive at best. There’s no clear evidence that artificial sweeteners cause cancer or obesity, at least in human beings. But the fear of artificial sweeteners was never quite a function of the scientific evidence — or never of just that. It stems as much from a sense that every pleasure has its consequences: that when we try to hack our taste buds in the lab — to wrench the thrill of sugar from its ill effects — we’re cheating at a game we’ll never win.

Just beware the first paragraph, which may spoil aspects of Breaking Bad for those who have not finished it.

Caught Like Insects in a Web

I’d estimate that the New Yorker has published more than 50,000 cartoons since its first issue in 1925 (I couldn’t find a precise number in a cursory Google search). So it’s surprising to learn that the single most reprinted cartoon of that nearly 90-year history is the one by Peter Steiner from 1993 that says, “On the internet, nobody knows you’re a dog.”

In an interview in 2000, Steiner said the line didn’t feel that profound to him when he wrote it. “I guess, though, when you tap into the zeitgeist you don’t necessarily know you’re doing it.” But the idea quickly caught on as shorthand for the internet’s spirit of anonymity, especially in chatrooms and message boards—a spirit that lives on in sites like Reddit, where “doxing” someone is one of the worst crimes you can commit.

In those early days, the internet felt like an ocean made for skinny-dipping; instead of doffing your clothes, you doffed your identity. You could read about, look at, discuss, and eventually purchase just about anything that interested you, without fear of anyone looking at you funny. This lack of identity could be used for nefarious purposes, of course, and it could lead people down any number of self-destructive paths. But for many, it was liberating to find that, on the web, you could explore your true nature and find fellow travelers without shame.

But as paranoia grows about the NSA reading our emails and Google tapping into our home thermostats, it’s increasingly clear that — rather than providing an identity-free playground — the web can just as easily capture and preserve aspects of our identities we would have preferred to keep hidden. What started as a metaphor to describe the complexly interconnected network has come to suggest a spider’s sticky trap.

I thought of this listening to a recent episode of WTF with Marc Maron. Comedian Artie Lange was telling the story of how he came into his own as a stand-up comedian by exploring, with brutal honesty, the darkness of his personal life. Then he stopped himself for a second to explain that he would never have been able to achieve that level of honesty onstage if he’d worried about his sets appearing on the internet.

Lange: It was before every jerk off had a cellphone taping you. Remember when it was midnight at a club in Cincinnati. It was just you and those people! That was it….Now it’s you and everyone in the fucking world.

Maron: And on Twitter…you can’t do anonymous sets anymore.

Lange: Exactly. An anonymous set is what makes you…The comics are going to get worse man, ’cause they’re gonna check themselves…They’re not gonna wanna see themselves bombing on Instagram or wherever the fuck it is and they’re never gonna take risks.

Where the internet used to encourage risk, now it seems to inhibit it, because it turns out the web can capture anything. What you say in front of friends, or even in front of an audience, can blow away with the wind. On the web, your words can stick around, can be passed around. Celebrities may have been the early victims, but now anyone is fair game. Millions of people are potentially watching you, ready to descend in a feeding frenzy of judgment. In the New Yorker’s Page-Turner blog, Mark O’Connell writes about the phenomenon of Twitter users deleting their tweets, something he has seen happen in real time.

It’s a rare and fleeting sight, this emergency recall of language, and I find it touching, as though the person had reached out to pluck his words from the air before they could set about doing their disastrous work in the world, making their author seem boring or unfunny or ignorant or glib or stupid.

Maybe we should treat the web like a public place, with certain standards of behavior. Maybe those who engage in disorderly conduct, posting creepshots and racist tweets, should be exposed and held to account. Perhaps our expectation of anonymity on the internet never made sense. The problem is that the digital trails we leave on the web can blur the line between speech and thought, between imagination and intent.

It’s that blurred line Robert Kolker explores in his piece for New York magazine about the so-called “Cannibal Cop,” Gilberto Valle, who never kidnapped, raped, murdered, or cannibalized anyone, but who chatted about it obsessively online. And even though there was little evidence that he took any steps to make his fantasies a reality, his online discussions served to convict him of conspiracy to do so. Kolker writes:

The line between criminal thoughts and action is something the courts have pondered for decades…What’s changed in recent years are the tools used to detect intent—namely, a person’s online activity. “We’ve always said you can’t punish for thoughts alone, but now we really know what the thoughts are,” says Audrey Rogers, a law professor at Pace University. [emphasis mine]

I’m reminded of a recent Radiolab episode about Ötzi, the 5,000-year-old Iceman discovered in the Alps in 1991. For more than two decades, archaeologists have pored over the details of his clothing, his possessions, his tattoos, and the arrowhead lodged in his back, evidence he was murdered. From the contents of his stomach, they’ve even determined what he ate for his final meal. I wonder if there will someday be archaeologists who sift through our hard drives, tracing out the many digital trails we’ve left in the web, trying to determine not what we were eating, but what we were thinking. Will their findings be accurate?

To paraphrase John Keats, most lives used to be writ in water. Now they’re writ in code. As much as our digital lives are only partial documents, they often seem more real to strangers simply because they are what has been documented. Maybe the internet doesn’t know you’re a dog, but it doesn’t care. In the eyes of strangers, you are that which the web reveals you to be, because the web is where the evidence is.

Songs about Songs

I love what Shawn Blanc said about what Stephen Hackett said about what John Roderick said about focusing one’s attention on creating “primary source material,” rather than mere commentary.

By saying so, however, I fear that I’m engaging in mere commentary — in what Roderick calls “this chattering sort of criticism and culture digestion that is so much of I guess what we call content — Internet content, which is just like, ’Oh, this just came out and now I’m talking about it and now I’m talking about this other guy who was talking about it.’”

But I’m not sure I would draw such a qualitative distinction between primary and secondary source material. Songs are not empirically better than linked list blog posts. I’d rather read a brief but beautifully crafted post on Kottke or Daring Fireball than listen to a lot of the songs currently on the radio. What matters is the intention, the craft, the effort behind what you make. A close reading of Roderick’s words suggests he might agree.

You know, if you’re making a song, or if you’re writing a story, that is source material. It’s primary. It’s the thing that did not exist before. You’re not commenting. Presumably, your song is not commenting on some earlier song, or if it is, it’s doing it in an inventive way.

The French writer Michel de Montaigne has long been considered the inventor of the essay. The original meaning of the word “essay” was “stab” or “attempt,” because he would take an idea and poke at it from as many different angles as he could think of. He’s more recently been called the godfather of blogging, because he didn’t just write down his own thoughts. He constantly quoted from the authors he was reading and then reflected upon how their ideas comported with his own. He was a great writer but also a great reader and a great commentator. It’s a tradition carried on by bloggers like Kottke, whose work Tim Carmody once described as “watching an agile mind at work, one attached to a living, breathing person, and feeling like you were tapped into a discussion that was bringing together the most vital parts of the web.”

In a piece called “Trapped by tl;dr” (via Shawn Blanc again), Seth Godin wrote:

“There are thousands of times as many things available to read as there were a decade ago. It’s possible that in fact there are millions as many.”

That’s precisely why we need people who are great readers, people who can sort through the best of what’s out there, who can, in their own way, write songs about the songs we’re all inundated with. And do it in an inventive way.

What the First App Says about Us

MG Siegler believes the first app you open in the morning says something about you.

I see the first app you turn to in the morning as the new homepage. Some might argue it’s your entire homescreen of apps, but I don’t think that’s right. It’s the one service you care most about, no matter the reason, and want to load immediately upon hitting the web. The delivery device has changed, but the concept has not.

What I find interesting is not which app people are choosing to open first thing in the morning, but the fact that apps are the first thing so many of us choose. Siegler traces back his own first app from Twitter to Path to Facebook to Email. For me it would be Twitter to Reeder to Email.

And I think Siegler’s right that before smartphones, it would have been a favorite website on my laptop, something like Slate or Pitchfork or the New York Times. And if I go further back (like a hypnotist regressing the patient to remember former lives), before we even had the internet, it would have been a book, or a copy of The New Yorker, or (even further back) cartoons on TV.

The difference between apps and everything that came before is that the apps we choose now (Twitter, Facebook, Flipboard, RSS readers) tend to gather and serve up content from myriad, disparate sources. Before apps, we had to choose one source at a time. What I find intoxicating about the apps I open in the morning is the possibility of surprise. As Ben Thompson says, it’s so much more delightful to get the gift I didn’t know I wanted.

But I agree with Rands and Alexis Madrigal that this stream of brief interestingness might not be entirely good for me. Perhaps it’s time to try a new first app.

A Software World

At the end of his takedown of an article that calls 2013 “A Lost Year for Tech” (neatly summed up as “a sad pile of piss-on-everything cynicism”), John Gruber writes:

There’s a nihilistic streak in tech journalism that I just don’t see in other fields. Sports, movies, cars, wristwatches, cameras, food — writers who cover these fields tend to celebrate, to relish, the best their fields have to offer. Technology, on the other hand, seems to attract enthusiasts with no actual enthusiasm.

Rene Ritchie followed up on that point, wondering where the nihilism comes from.

It could just be that computer technology is still relatively new and tech journalists - and tech readers, we feed each other - lack the maturity of older, more established industries and pursuits. It could be that tech writers sometimes seem to care more about being perceived as cool than in being good.

I think this nihilistic streak could be a symptom of the deep-seated suspicion of technology in Western culture, even among those of us who claim to love it. We all use technology, but we don’t trust it. We fear the power of its “reality distortion field.” We tend to see the experiences it enables as inauthentic, alien, perhaps corrupting, and certainly inferior to “real” experiences. This theme of technology’s malevolent influence is obvious in a lot of science fiction, from Frankenstein to The Matrix, but you can even see it in the training montage from Rocky IV.

The Russian might have a fancy weight room, fancy exercise equipment, and fancy synthetic muscles, but he’ll never triumph over Rocky, who can lift real wagons and real bags of rocks and run to the top of a real mountain.

I thought of that montage when I saw this post from Seth Godin, which makes the fairly reasonable case that our pursuit of productivity (through apps and blogs and devices) often makes us less productive in the end. I take his point until he gives this example:

Isaac Asimov wrote more than 400 books, on a manual typewriter, with no access to modern productivity tools. I find it hard to imagine they would have helped him write 400 more.

First, “400 more” is a pretty high bar. But wouldn’t Asimov have derived some benefit from modern productivity tools? Like, say, a computer? If not Asimov, I imagine there were countless people born before the advent of “modern productivity tools” who would have benefited enormously from them. Of course these tools can’t write 400 books for you, but they can reduce the friction just enough to get the ball rolling.

To give just two examples, I was a remarkably disorganized person for most of my life, because I insisted on trying to keep my deadlines, appointments, and to-do items in my head. Now, with apps like OmniFocus and Due, I’m not only much more organized but also remarkably less anxious about what I might forget. I’ve also been somewhat overweight for most of my adult life, but in the last four years I’ve lost about 50 pounds, mainly due to changes in exercise and diet. But those changes were the direct result of tracking my calories and exercise through the app Lose It. (I had no idea, for instance, that running five miles burns only about as many calories as one big sandwich.) Those are the two biggest impacts on my life, but software has, in a variety of ways, also helped me create better lessons for my students, grade papers more thoroughly, capture innumerable moments of my children’s lives, stay in touch with people I love, write a novel, and start this blog. My life is significantly better as a result of this technology.
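
For what it’s worth, that deflating sandwich fact roughly checks out. Here’s a minimal back-of-envelope sketch, assuming a common rule of thumb of roughly 100 calories burned per mile of running and a 550-calorie deli sandwich; both numbers are illustrative assumptions on my part, not figures from Lose It:

```python
# Back-of-envelope check of the "five miles ~ one big sandwich" observation.
# Both constants are rough assumptions, not values taken from Lose It.
KCAL_PER_MILE = 100        # commonly cited estimate for an average-sized runner
BIG_SANDWICH_KCAL = 550    # a large deli sandwich can easily reach this

miles = 5
run_kcal = miles * KCAL_PER_MILE

print(f"{miles}-mile run:  ~{run_kcal} kcal")
print(f"Big sandwich: ~{BIG_SANDWICH_KCAL} kcal")
# The two land in the same ballpark, which is exactly the kind of
# unwelcome arithmetic a calorie tracker makes hard to ignore.
```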

Which brings me back to the nihilism of tech journalists. Few, if any, of these small improvements to the daily life of one person would merit a headline in a tech publication. We tend to expect, and tend only to notice, the big revolutions in technology: the personal computer, the iPod, the smartphone, the tablet. It’s no coincidence that these are all hardware products. Hardware feels more “real” to us. Maybe the reason tech journalists are so often depressed about the state of technology is that hardware revolutions are extremely hard to come by. Dozens of hardware makers get up on stages and set up booths at CES every year touting their new attempts at hardware revolutions. And most of them fall completely flat.

Software doesn’t get the same attention, because it seems less substantial and less trustworthy, and because it’s behind a screen. But software is the real story. Frank Chimero’s fantastic web essay What Screens Want makes this point by citing a video clip of the British documentary program Connections, in which the host describes how plastic has completely permeated our world. Chimero then rewrites the script, replacing the word “plastic” with the word “software.”

It’s a software world. And because of software, it’s a soft world in a different sense, in the original sense of the word: it changes its shape easily.

Software is the magic that makes our devices “indistinguishable from magic”. Many of us think of it as an art form, and yet it’s a strange sort of art form. Most art forms don’t remind you to take out the recycling or help you lose fifty pounds. But the things software can do are almost limitless. Maybe tech journalists would be less cynical about the advances of technology if they wrote more about software than hardware, and more about the how than the what — how software is not only changing its shape, but changing our shape, in more ways than one. That is the real, ongoing technological revolution.

Why (I Hope) Blogs Still Matter in 2014

I started this blog less than six months ago, and for the first three months, I had fewer than a hundred page views. But my readership grew in fits and starts, with a retweet here and a link there, even an occasional block quote, until finally, I arrived home after work a couple weeks ago to find a link to something I wrote on Kottke.org.

Kottke-fucking-dot-org (the New Yorker of blogs, as far as I’m concerned).

My page views went up to 12,000 in a single day, small potatoes for some I’m sure, but a big deal to me. People were starting to follow me on Twitter, sending me messages about how much they enjoyed my writing. After nearly a decade of working in public radio, and then several years writing and struggling to publish a novel, I felt as though blogging was finally giving me a platform and an audience I could call my own.

So imagine my surprise when, just a few days later, in a piece for the Nieman Journalism Lab, Kottke himself announced that, “The blog is dead.” He hedged a bit in a post on his blog, but stood by his main point:

Instead of blogging, people are posting to Tumblr, tweeting, pinning things to their board, posting to Reddit, Snapchatting, updating Facebook statuses, Instagramming, and publishing on Medium. In 1997, wired teens created online diaries, and in 2004 the blog was king. Today, teens are about as likely to start a blog (over Instagramming or Snapchatting) as they are to buy a music CD. Blogs are for 40-somethings with kids.

I’m not quite forty, but I do have kids, so I found this unbearably depressing. Apparently, I have found the right creative platform for myself at precisely the moment it’s fallen out of fashion.

Except I don’t really believe that. And Kottke doesn’t seem to either. The footnote in his blog post about Tumblr (and whether Tumblr blogs are actually blogs) bears this out.

If you asked a typical 23-year-old Tumblr user what they called this thing they’re doing on the site, I bet “blogging” would not be the first (or second) answer. No one thinks of posting to their Facebook as blogging or tweeting as microblogging or Instagramming as photoblogging. And if the people doing it think it’s different, I’ll take them at their word. After all, when early bloggers were attempting to classify their efforts as something other than online diaries or homepages, everyone eventually agreed. Let’s not fight everyone else on their choice of subculture and vocabulary.

So it’s the terminology that’s changing rather than the impulse. And while these alternative services are undoubtedly siphoning off casual users of what used to be blogs, the reason those users are leaving is that blogging platforms don’t provide the easiest access to the intended audience. My wife and I once used personal blogs to share pictures of and stories about our kids. Now we do that on Facebook because Facebook is where the friends and relatives are. If you want to communicate with your social group, you go to the service where your social group congregates, where your message will be conveyed to the largest number of people you know.

But I want to communicate with people I don’t know. And that’s why I think blogs, or personal websites, or single author web publications, or whatever-the-fuck-you-want-to-call-them, still matter.

Rewind about four years. I had just quit a terrible associate producer job in public radio and was failing to make it as a freelancer. That fall, I went to a public radio conference and got to meet one of the Kitchen Sisters (I was so star-struck, I didn’t even know which one she was) and other amazing producers like Kara Oehler and Joe Richman (when he asked me how freelancing was going, I said, “Teaching at a technical college is going pretty well.”) But the most interesting conversation I had that night was with a guy who had been working behind the scenes for most of his career, helping different radio shows find their own unique production style.

I was telling him how I wasn’t sure I could sell the kinds of stories I had been making before the economy crashed, stories about ordinary life with no real news hook. The only show that still had a large freelance budget was Marketplace, and I didn’t want to change my style to suit them. This guy’s advice? Start a podcast. Just make the kinds of stories I wanted to make and put them out in the world, and if the stories were good, the audience would eventually come to me. He cited Jesse Thorn of MaximumFun as a model.

I didn’t follow that guy’s advice (a podcast seemed like too much work), but he did. That guy was Roman Mars. He went on to create and host the amazing show 99% Invisible. Not only did the audience come to him, but he recently raised more than $375,000 on Kickstarter to fund the fourth season of the show. His experience echoes the words of Radiolab co-host Robert Krulwich, who offered similar advice in a commencement address to the Berkeley Graduate School of Journalism.

Suppose, instead of waiting for a job offer from the New Yorker, suppose next month, you go to your living room, sit down, and just do what you love to do. If you write, you write. You write a blog. If you shoot, find a friend, someone you know and like, and the two of you write a script. You make something. No one will pay you. No one will care. No one will notice, except of course you and the people you’re doing it with. But then you publish, you put it on line, which these days is totally doable, and then… you do it again.

I had those words in mind when I started my blog six months ago, and I’ve had them in mind whenever I think I should be pitching one of my blog posts to an online publication like Slate or Salon or The Magazine. I’d like to get paid for what I write, but there’s something wonderfully satisfying about owning and controlling my own work. I also don’t want to wait to see if someone will publish it. I want to publish, and see if the audience comes to me.

This is what blogs still offer.

When I first read Kottke’s post on the death of blogs, my knee-jerk fear was that it meant fewer and fewer people would be reading blogs in the near future. What I now think he means is that fewer and fewer people will be writing blogs in the near future. And maybe that’s a good thing. Maybe the rise of social networking platforms will function like a brush fire, clearing out the forest for those of us who want to do more than share a picture, video, or link—those of us who actually want to read, analyze, reflect on, argue with, and write thoughtfully about the stream of information we’re all trying to navigate. Those are the blogs I want to be reading in 2014, and beyond.

Misunderstood or Double-edged?

A lot of people are writing about Apple's latest commercial for the iPhone. Gruber thinks it's their best ad of the year; Kottke calls it one of their best ever. Nick Heer compares it to Don Draper's carousel pitch for the slide projector. But Ben Thompson's take is my favorite because he responds to the ad's critics, who say that Apple is "promoting recording your family over actually spending time with your family."

This criticism is indicative of the recent conventional wisdom that these devices are making us stupid, lonely, and disconnected from the real world. Thompson sees the ad as an attempt to bridge the technological/generational divide, to say the reason we're so obsessed with our gadgets is that they can actually do amazing things.

On the flipside, how many young people – including, I’d wager, many reading this blog – have parents who just don’t get us, who see technology as a threat, best represented by that “can-you-put-that-damn-thing-down-and-join-us-in-the-real-world!?” smartphone in our hands, without any appreciation that it’s that phone and the world it represents that has allowed us to find ourselves and become the person we know they wanted us to be?

In the first half of the ad, the kid is portrayed as self-absorbed, antisocial, even rude in his attention to his iPhone. But why? Would we have seen him in such a negative light if he had been reading a copy of The Catcher in the Rye, or writing in a journal, or drawing in a sketchpad, or noodling on a guitar? The magical, revolutionary thing about an iPhone (and I say this unironically) is that it can become a novel, a journal, a sketchbook, a musical instrument, or a video camera/video editor (with apps like iBooks, Day One, Paper, GarageBand, and iMovie among many others).

And yet, we've all seen people ignoring the real world in favor of their device, and they were not involved in a heartwarming creative pursuit. I have looked at Twitter more than once while my children were trying to have a conversation with me. I have even checked Facebook, surreptitiously, while my son, who'd just learned to read, struggled to read a new book to me (not the first book he read to me, but still). I'm not proud of this. And I worry that my kids, who love these devices as much as (if not more than) I do, will soon be acting just like the teenager in this ad. And instead of making a beautiful home video during family gatherings, they'll be sexting with Russian oligarchs, selling their non-essential organs, and ordering designer brain implants on some future version of the Silk Road.

That's the double edge of the technology we now have in our pockets. It gives us access to boundless information, and enables all kinds of interactions with that information, but it doesn't distinguish between empty and nourishing information, or help us determine the right uses of that information. We have to make those distinctions and those choices.

One of my favorite pictures of my kids was taken with, edited on, and sent to me from my wife's iPhone. It shows my son and daughter, cuddled together in the dark, their radiant, smiling faces lit from beneath by an unearthly glow. You can't see the object making the glow in the picture, but it's the screen of an iPad. It's also the source of the joy on their faces.

That screen is not going away anytime soon, but we don't have to be passive viewers of it, merely consuming and feeling vaguely guilty about what we consume from it. There's immense creative power behind the screen. Instead of worrying about it, lamenting it, and disparaging it, we should focus on learning how best to use it: to gather, understand, shape, and share the information around us.

Placebo-philes

Audiophiles have gotten a lot of bad press recently, what with the apparently silly Pono music player (which plays much higher quality audio files despite almost no one being able to hear the difference) and the news from Wired magazine that "burning in" your headphones has no discernible effect on sound quality. Reading about the truly insane things audiophiles will do in pursuit of the perfect sound, I can't help reflecting back on that unfortunate period in my life when I almost fell down the same rabbit hole.

For me it started with a simple search for better headphones. I think I typed "best headphones under $50" into Google, and what came back was a series of lists, like this one or this one, ranking the best headphones at a series of price ranges. I settled on a pair pretty quickly, and when they arrived I loved them, but those lists had planted their hooks in my brain. How much better would my music sound if I were willing to spend just a little bit more?

I decided to research what headphones I would buy the next time I had saved up a decent amount of money, and my research led me to my first (and really only) foray into Internet forums: a website called Head-Fi, where enthusiasts gather to discuss, argue, and bond over their love of headphones and headphone-related accessories. It was a remarkably friendly place for people who enjoyed tuning out the world, but darkness lurked at the edges. People would post glowing reviews of the headphones they just bought, and others would weigh in about how much they loved those headphones too, but inevitably someone would say how those headphones would sound even better if connected to a decent headphone amplifier. Or a decent digital audio converter. Or how those headphones didn't even compare to these other headphones that just cost a little more.

The perfect headphone setup always cost just a little bit more. Audio nirvana was always just out of reach.

Over the course of three years, I wound up buying one pair of headphones that cost about $100, then another that cost about $150, then a headphone amplifier that cost about $100, then another headphone amplifier that cost several hundred, then a special iPod that had been rewired for better sound, then several more pairs of headphones, each more expensive than the last. It helped that the internet made it easy to resell my previous purchases in order to fund my new purchases. At the height of my sickness, my portable sound system looked like this:

[Photo: my portable rig]

But that was nothing compared to the equipment owned, and the prices paid, by many others. The most money I ever paid for headphones was about $300. But the best headphones were going for more than $1000, and the best amplifiers and related devices were many times that. People would post pictures like this:

[Photo: a high-end rig]

and I'd wonder what in God's name that would sound like.

I don't think it's an accident that this period in my life was the same period in which I had two children in diapers and an extremely stressful job. After putting the kids to bed, if I didn't have any more work to do, and if my wife wanted to watch TV, I would find a quiet spot in the house and get lost in the increasingly detailed soundstage my gear supplied.

But the specter that loomed over everything was the idea that this was all some big placebo effect. I would occasionally spend an evening listening to a song on my new set of headphones and then on my old set, or with my new amplifier and then my old amplifier. I would make my wife listen to see if she heard a difference. Sometimes she did, sometimes she didn't. Sometimes I didn't. Every once in a while, I'd read a post on Head-Fi about someone who was selling everything he'd bought because he realized he was listening to his equipment rather than music. I finally had the same realization and made the same decision. At the time, I felt like a recovering addict, or a victim of a con artist, reformed but slightly ashamed.

I got a new perspective on that period, however, when I read this recent piece by Felix Salmon (via Kottke) about converting money into happiness. Salmon is also interested in placebo effects, specifically in the world of wine tasting, where experiments have frequently shown that very few people can tell the difference between cheap and expensive wine, or even the difference between red and white wine. When I first read about those studies, they reminded me of the scene in Brideshead Revisited when a couple of guys get drunk and describe the wine they're tasting with increasingly absurd metaphors:

"….It is a little shy wine like a gazelle." "Like a leprechaun." "Dappled in a tapestry window." "and this is a wise old wine." "A prophet in a cave." "and this is a necklace of pearls on a white neck." "Like a swan." "Like the last unicorn."

I had moments almost as absurd with my headphones, when I heard things inside songs I swore I'd never heard before, when I felt as if parts of the music were two-dimensional backdrops and then three-dimensional shapes would leap out of the picture towards me, or the music would drizzle over my head, or crackle like lightning, or I'd swear I could smell the studio where the song had been recorded, or something.

In other words, I was an idiot. Because on other nights, usually after I'd owned that same set of gear for a little while, I wouldn't hear those things any more, and I'd start thinking that I needed better gear. I needed a new placebo effect.

It's easy to sneer at the placebo effect, or to feel ashamed of it when you're its victim. And that's precisely why I found Felix Salmon's piece revelatory, because instead of sneering at the placebo effect of fancy wine, its marketing, and its slightly higher prices, he thinks we should take advantage of it. If the placebo effect makes us happy, why not take advantage of that happiness?

The more you spend on a wine, the more you like it. It really doesn’t matter what the wine is at all. But when you’re primed to taste a wine which you know a bit about, including the fact that you spent a significant amount of money on, then you’ll find things in that bottle which you love ... After all, what you see on the label, including what you see on the price tag, is important information which can tell you a lot about what you’re drinking. And the key to any kind of connoisseurship is informed appreciation of something beautiful.

This idea of "informed appreciation" reminds me of another area of modern life beset by placebo effects: the world of alternative medicine. In a recent article for the Atlantic, David H. Freedman argues that there's virtually no scientific evidence that alternative medicine (anything from chiropractic care to acupuncture) has any curative benefit beyond a placebo effect. And so, many scientists are outraged that anyone takes alternative medicine seriously. However, there is one area where alternative medicine often trumps traditional medicine: stress reduction. And stress reduction can, of course, make a huge impact on people's health. The Atlantic article quotes Elizabeth Blackburn, a biologist at the University of California at San Francisco and a Nobel laureate.

“We tend to forget how powerful an organ the brain is in our biology,” Blackburn told me. “It’s the big controller. We’re seeing that the brain pokes its nose into a lot of the processes involved in these chronic diseases. It’s not that you can wish these diseases away, but it seems we can prevent and slow their onset with stress management.” Numerous studies have found that stress impairs the immune system, and a recent study found that relieving stress even seems to be linked to slowing the progression of cancer in some patients.

Perhaps not surprisingly, a trip to the chiropractor or the acupuncturist is much more likely to reduce your stress than a trip to the doctor. If anything, a trip to the doctor makes you more anxious.

Maybe each of these activities (listening to high end audio gear, drinking high end wine, having needles inserted into your chakras) is really about ritualizing a sensory experience. By putting on headphones you know are high quality, or drinking expensive wine, or entering the chiropractor's office, you are telling yourself, "I am going to focus on this moment. I am going to savor this." It's the act of savoring, rather than the savoring tool, that results in both happiness and a longer life.

Of course, you don't need ultra-high-end gear to enjoy your music, or ultra-high-end wine to enjoy your evening, just as you shouldn't solely use acupuncture to treat your cancer. It might be as effective to learn how to meditate. But maybe we all just need to meditate in different ways.