The above faux O’Reilly edition really struck a nerve with me. Obviously software engineering on big projects continues apace. But for many quick & dirty tasks, instead of laboriously (or frankly, not so laboriously) assembling a script, a precise query into Stack Overflow often suffices. It reminds me of the universe of David Brin’s Uplift saga.
In that universe there is a Galactic Library, which basically allows sentient species to not need to reinvent the wheel. Humans are unique because of their need to understand the technology that they use. At the time I found the premise interesting…but after reading Joe Henrich’s The Secret of Our Success, I have come to think that the galactic civilization which Brin depicts is simply an extrapolation of our own technological world. Much of our life is a magic “turn-key” black box. It is probably self-evident that average humans don’t know how computers or even automobiles work. But Henrich points out that even customs such as the manner in which indigenous American peoples detoxify cassava are “encapsulated” from conscious understanding. The “galactic library” is just a metaphor for the crutch that is social cognition.
It looks like people are searching for Facebook less often on Google over the past 3 years. Probably because everyone knows about Facebook now. People have been looking for signs of decline for many years. I was in that game too. It seemed inevitable. But perhaps Facebook is going to be the boring and persistent “climax ecosystem” of the social web for the next generation or so?
I remember very precisely that it was in the spring of 2008 that I finally transitioned toward being a total desktop Linux user. Basically I’d been in Linux for a few days, forgotten that fact, and tried to watch something on Netflix streaming. Only then did I realize I wasn’t in Windows! Now that Netflix works on Ubuntu I don’t really use Windows at all. I still have a dual-boot notebook, but I have two desktop computers that are Linux-only machines.
Well, it looks like I’m something of an outlier. I think the rise of Mac utilization among nerds over the past 10 years has really had an effect. Since you can go into the terminal on a Mac it removes a lot of the advantage of Ubuntu, which after all is still somewhat less “turn-key” than Windows or Mac OS.
Then of course there’s Android. So in a way Linux has won. Just not in the way people were imagining in the mid-2000s.
The new TNR seems to be a weird mishmash of SJW clickbait and interesting pieces on aspects of culture which the old TNR probably wouldn’t have thought to publish. I doubt that this version of TNR is long for this world, but I do appreciate pieces such as this conversation between Neil Gaiman and Kazuo Ishiguro, Breaking the Boundaries Between Fantasy and Literary Fiction. It’s rather self-indulgent, but what do you expect? The main question they circle around is the nature of genre boundaries. This portion really jumped out at me:
NG: I loved the idea, because it seems to me that subject matter doesn’t determine genre. Genres only start existing when there’s enough of them to form a sort of critical mass in a bookshop, and even that can go away. A bookstore worker in America was telling me that he’d worked in Borders when they decided to get rid of their horror section, because people weren’t coming into it. So his job was to take the novels and decide which ones were going to go and live in Science Fiction and Fantasy and which ones were going to Thrillers.
KI: Does that mean horror has disappeared as a genre?
NG: It definitely faded away as a bookshop category, which then meant that a lot of people who had been making their living as horror writers had to decide what they were, because their sales were diminishing. In fact, a lot of novels that are currently being published as thrillers are books that probably would have been published as horror 20 years ago.
When I was an adolescent, the way I decided which book to purchase, usually a science fiction or fantasy paperback, was to look for specific authors and covers. There wasn’t really much planning or research ahead of time. There was a great deal of serendipity involved.
Things are different today. Usually before buying something in person I do some research online. Also, recommendation engines are pretty useful, and good at guiding you to a narrower set of choices attuned to your preferences. This obviates the need to some extent for genre categories as guides in the first place.
I’m thinking of this specifically because apparently Spotify Wants Listeners to Break Down Music Barriers (well, according to Farhad Manjoo). It makes sense for Spotify since it has so much more data to work with than old-style radio stations. Similarly, at some point Amazon will have enough reading and purchase information to get really good at pointing you to authors and works that are suited to your interests.
In 2008 my friend Michael Vassar, in agreeing with Peter Thiel’s thesis about the decline of innovation, suggested that the only game-changing technology of the 21st century so far had been the iPhone. 2008 was a young year yet for what we then termed “smartphones,” which my daughter now thinks of simply as the “phone.”* I remember vaguely that my response was that the 21st century was young, and we didn’t know what impacts the new phones would have on our everyday life.
One truism has been that the new phones have cannibalized whole sectors. Think maps and watches. This week I realized that it had finally happened to my iPod shuffle, from which I have been moderately inseparable since January of 2008. The morning checklist of what I have on me no longer necessarily includes a shuffle, because as long as I have a power source (as I do at the office) there’s no reason why the phone’s battery life should be an inconvenience.
The 19th century was the age of steam and the train. The 20th century was the age of oil and the automobile. We never really had a nuclear age. But it looks like this century will be the age of electricity and the phone. Though what we mean by “phone” is going to change a great deal, to the point where the term itself will be a curious anachronism. Children in the next generation may wonder why we call them phones in the first place.
* To be fair, in terms of pure telephone utility I think the older flip phones were better as single feature devices than the current smartphones (battery life, robustness, etc.).
Periodically in my Facebook feed I get people posting articles like this, Science Has Great News for People Who Read Actual Books. By “actual books” the author means a physical book, and in particular a codex. Apparently at a conference last month a study was presented in which a sample of 50 individuals showed better recall of plot points from a 30-page story when reading a printed book than when reading on an e-reader (N = 25 for each treatment). The report in The Guardian finishes:
The Elizabeth George study included only two experienced Kindle users, and she is keen to replicate it using a greater proportion of Kindle regulars. But she warned against assuming that the “digital natives” of today would perform better.
“I don’t think we should assume it is all to do with habits, and base decisions to replace print textbooks with iPads, for instance, on such assumptions. Studies with students, for instance, have shown that they often prefer to read on paper,” she said.
First, someone who is presenting a huge result based on N = 50 (and a W.E.I.R.D. sample at that) has a lot of chutzpah in advising caution against such broad general statements. The only true digital natives today are under the age of 10 when it comes to reading on devices such as the Kindle. These sorts of studies seem keen on reiterating the prejudices of the contemporary median readership. Who cares what median students prefer today? A few years ago MySpace was preferred. I believe that if these studies were going on in the 4th century A.D. then you’d see just how much the well-educated Roman preferred the scroll to the uncouth codex (though a well-educated Greek slave was probably the most “haptic” and “serendipitous” reading device of all!). The “actual book” is actually an innovation, as widespread utilization of the codex format took centuries to become the norm. The Christian Bible was one of the first books habitually produced in the codex format, and the spread of Christianity has been credited with the popularization of the codex relative to the scroll. The point is that a “book” is an abstraction. The codex, scroll, or e-reader is its concrete manifestation. Perhaps it is true that the codex format is ideally optimized for human comprehension. I suspect not. Humans are much more prejudiced toward their habits than they are optimized toward reading. Humans didn’t evolve with reading.
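To make the sample-size complaint concrete, here is a toy simulation (every number below is invented for the sketch, nothing comes from the study itself): assume print and e-reader recall are identical, and see how often a gap of a couple of plot points out of thirty appears by chance alone with only 25 readers per arm.

```python
# Simulate many "studies" under the null: both arms draw recall scores
# from the same distribution, so any observed gap is pure noise.
import random

random.seed(42)

def fake_study(n=25):
    # Hypothetical recall scores (out of 30 plot points); identical
    # distributions for the "paper" and "kindle" arms.
    paper = [random.gauss(20, 4) for _ in range(n)]
    kindle = [random.gauss(20, 4) for _ in range(n)]
    return sum(paper) / n - sum(kindle) / n

gaps = [fake_study() for _ in range(2000)]
big = sum(abs(g) > 2 for g in gaps) / len(gaps)
print(f"null studies showing a >2-point gap: {big:.1%}")
```

Even with no real effect, a nontrivial fraction of such small studies will show an apparently meaningful difference in one direction or the other.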
There are real problems with the e-reader format. I dislike being unable to jump between pages “naturally” too. But, I’m rather sure that these problems will be solved at some point. The codex has had 2,000 years. Give e-readers at least another 10.
Also, let’s keep it real: the average American does not read very much. The main reason I’ve mostly switched to the e-reader format is that I hate having to lug around many books (and I am not a hoarder; I sell or discard books regularly). If the mean number of books read is 12 while the median is 5, you know the distribution isn’t normal. There are many people who don’t read at all, and a few who read a lot. Of those 12 books many are going to be paperbacks, and it’s pretty easy to imagine storing a dozen paperbacks. But I have a lot of textbooks, as well as academic press books. And I’m on the mild side compared to people who are older or have more of a hoarding habit. I wonder how much acceptance of the convenience of e-book formats correlates with reading so much that sticking with the traditional physical formats would clutter one’s house.
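The mean-versus-median point can be illustrated with a toy distribution; the counts below are invented, chosen only to roughly reproduce the 12-versus-5 gap.

```python
# A right-skewed "books read per year" distribution: many non-readers,
# some light readers, a few heavy readers (all counts hypothetical).
from statistics import mean, median

books_read = [0] * 8 + [5] * 9 + [40, 60, 85]

print(mean(books_read))    # 11.5 — pulled upward by the heavy readers
print(median(books_read))  # 5.0 — reflects the typical person
```

The handful of heavy readers drags the mean well above the median, which is exactly the signature of a non-normal, right-skewed distribution.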
Addendum: If my hardcover books could be compressed somehow so they took up minimal space I might prefer hardcover to e-book. The main thing I would miss is the search features, but I might trade that for being able to jump easily between pages (there are indexes!). I’m not sure that my daughter or son would make the same decision though.
One of the most realistic aspects of Boyhood was the rise of ubiquitous mobile technology in the period that the narrative encompassed. It is common for many skeptics of technological innovation to suggest that the future just isn’t what it was chalked up to be. We don’t live in glass-encased arcologies connected by sky-bridges. Rather, the future has been more about the less visually striking spread of an invisible web of information which has infused all aspects of our lives. For example, recently I was running with a friend who wondered about the origins of a particular brewery. I had my phone on my person since it was tracking our running, so I pulled it out and asked the question verbally. The phone thought for a little while and spit back the appropriate answer (the brewery was located in Orange county, but the trail it was named after is in northern California). We take this for granted now, but even 10 years ago this would have seemed amazing. My friend Michael Vassar told me in 2008 that he agreed with Peter Thiel’s skepticism of the nature of modern technological innovation, pointing out that of late only the iPhone had been notable. But at that time I don’t think we grasped how transformative the iPhone was. Ultimately I suspect it will usher in the age of ubiquitous personal computing in all aspects of our lives, not just when we sit down at a desk and boot up a notebook or tower.
And it’s not just smartphones. As I mentioned, I’m going to visit New York City for the first time in ~4 years, and to prep I downloaded some helpful apps and constructed a Google calendar with places and times nailed down precisely. I didn’t do this on previous trips. What changed? First, I’ve become habituated to putting everything into calendars, and squeezing as much ‘productivity’ out of every unit of time as possible. Second, an integrated ecosystem of applications now exists to enable this sort of planning without much hassle. I have access to my calendar on my phone and any computer I can get to. Instead of an analog world where one has a qualitative sense of progression through time, things are becoming digitized, discrete instances perfectly separated. “Just-in-time” gratification services such as Uber are obviously perfectly suited to the mentality of someone like me, for whom the phone has become an avenue by which I extend my influence to and operate upon the world. Whether you think this is good or bad, it is of great consequence. And though there is some commentary on the changes that are occurring, I don’t think it is commensurate with the silent social revolution.
My household has three Kindle Fire tablets (two of them HD). Obviously they are used for things besides reading books, but the main reason for their purchase was as text delivery devices. If I had an extra house to store physical books and a manservant of some sort to manage the collection, I would be very happy with “dead tree.” I had a professor years ago who admitted he had an extra house which he ended up filling with his enormous book collection, to the annoyance of his wife. I can’t imagine being in that situation, but my “book habit” was getting out of control by the middle years of the 2000s. Moving was starting to become a major chore which I dreaded because of the boxes of books. And I don’t miss lugging around large numbers of books when I’m going on a road trip. I am well aware that there are unintended downsides to signing on to the e-book revolution, and Amazon in particular. But the convenience factor is just too high. And yes, I’m a pretty big user of Amazon Prime; I never liked physical shopping.
So I was curious when Amazon launched a subscription book service. The New York Times reviews the pros and cons, Amazon Unveils E-Book Subscription Service, With Some Notable Absences. Some people are calling it a glorified library card. If that were the case I would probably sign up. But looking at the collection of books I don’t see many recent academic press publications, which make up the largest proportion of my reading. So as it happens it isn’t a glorified library card. So I’m not signing up, even though the price point isn’t high at all.
Two weeks ago I saw the film Mud. It’s one of the few “serious” movies I’ve watched over the past two years. I can’t tell you what Pacific Rim was really about despite having viewed it five days ago (aside from the striking fact that two of the protagonists were played by bizarrely similar-looking actors). And yet aspects of Mud have stuck with me. Why? It’s not because of the plot, which was laughably implausible. Or the development of the characters, which I found a bit overwrought or cliche in most (though not all) cases. Rather, I am still reflecting upon the depiction of the main young protagonist, a fourteen-year-old played by Tye Sheridan, and the landscape upon which he is “coming of age,” the central theme of the film. The specific details of the concerns of a teenage boy navigating newfound feelings toward the opposite sex, an unstable family life, and a pedestrian rural milieu are not novel. Rather, it was the whole portrait which I think warrants further exploration here in 2013.
Not Arkansas, but not that different. Credit: Gary Halvorson, Oregon Archives
Though notionally set in the town of De Witt in the Mississippi delta area of Arkansas, Mud could have played out in any region of small-town America without changing the substance of the film. But one thing perplexed me: when do the events depicted in the film occur? There is one scene in which one of the teens stumbles upon a stash of Penthouse magazines, to his great excitement. A depiction of print pornography strikes me as a tell that the film is set before 1995. Mud is also a movie where landline telephones abound and are of great utility. No one uses mobile phones. But the chronology is never explicit, and not all elements line up with a pre-1995 era. One of the supporting characters is alluded to as having seen action in Vietnam. If this character was 30 in 1970, he would have been in his early-to-mid 50s before 1995. As it is, the actor playing the character is 70, and he looks to be depicting an individual of his real current age. If the character was 30 in 1970, he would be 70 “now,” which would mean that the film is set in 2010. The real now.
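The back-of-envelope chronology can be written out explicitly; the ages here are the ones posited in the discussion above, not anything stated in the film itself.

```python
# Dating the film from the Vietnam veteran's apparent age.
age_in_vietnam = 30
year_in_vietnam = 1970
birth_year = year_in_vietnam - age_in_vietnam  # 1940

# If the actor appears to be playing his real age:
actor_apparent_age = 70
implied_setting = birth_year + actor_apparent_age
print(implied_setting)  # 2010
```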
It brings to mind one critique of Whit Stillman’s Damsels in Distress: a fictional liberal arts campus where no one seems to be using cellular phones. But could the clever and mannered repartee so central to Stillman’s seminal Metropolitan even be plausible today?* I can’t believe that the dialogue between upper middle class New York City young adults would be possible at such length without the interruption of texts, etc. Similarly, the innocence and naivete of the young adolescents in Mud is probably not believable in a world with smartphones. This made me reflect upon my own youth…I was a teenager in the early to mid 1990s, the very last of those for whom print pornography might be titillating. Additionally, I lived in a town in the Intermontane West not too different from De Witt in its sense of isolation, and frankly backwardness. Though I always assumed that I would have a career which related to science, many of my close friends were much more like the young characters in Mud in their family circumstances and aspirations.
I did not live and grow up in a mythic and fantastical past. But it is notable that visual narratives such as Mud and Damsels in Distress, which are presumably attempting to comment in some deep fashion upon the human condition, have to implicitly peel away essential and ubiquitous aspects of contemporary modernity (in particular, information technology). It is as if this is necessary for them to further their aims of exploring the texture of their characters without distraction. There is a reason that Greek mythology and contemporary epic fantasy appeal to us despite their strange and startling settings. With contexts stripped of any modern referent, the only “signals” that come through are the crisp and universal ones. Certain relationships, the order of things, have become muddied and convoluted in the layers of complexity which modernity compels us to accept as a matter of course. An ahistorical 2013 no longer encumbered with the distractions of modern necessities is a cleaner canvas upon which one can paint the grand themes of life.
To give an example of what I’m thinking of, there is one scene in Mud where the teenage protagonist looks longingly upon an object of his inchoate affections in the parking lot of a Piggly Wiggly. I remember these sorts of activities from my own misbegotten youth. Doesn’t it seem much more plausible today that you would “Facebook stalk” the object of your adolescent affections? I can’t imagine watching a film where a substantial fraction of the screen time involves surfing the web, or texting back and forth. But that’s exactly what would have to happen in a coming-of-age film embedded in an information-technology-rich world!
Every generation thinks that it is the last and first in some deep fundamental way. Most are not. But I think it is possible that my own generation is first and last. The last of those weaned on low bandwidth analog 20th century information transmission devices. The first of those navigating the most primitive of the high bandwidth 21st century information technologies.
* Though filmed in 1989-1990, it was actually set in the late 1960s, though that was not particularly emphasized.
If you read this weblog via its RSS feed and Google Reader you are probably aware that you will have to stop doing so by July 1st. Here’s an article with links to replacements. I will enter into the record that I now use Feedly and I don’t miss Google Reader at all.
Gina Kolata’s piece in The New York Times, Poking Holes in Genetic Privacy, is stirring a lot of debate. In the wake of the NSA leaks that makes sense. And genetic privacy has always been a “hot button” issue for obvious reasons, as personal genomics transforms from a futuristic projection to a ubiquitous part of our lives. It seems to me that there’s a spectrum of reasonable objection here. I don’t think it’s a big deal if you are exposed for your “true ethnicity.” Yes, if we lived in Nazi Germany this might matter, but we don’t, and it doesn’t. There’s the reality that ethnicity is easy to ascertain without consent just by looking at someone. On the other hand if you or someone in your family carries a highly penetrant autosomal disease, then I think the rationale for genetic privacy is much stronger.
But it’s not just genetics. Some people have asserted that Google Glass is a replacement for smartphones. If so, we should be concerned if we’re concerned about privacy more generally. For example a few months ago I was asking a friend of mine the age of her father. She didn’t have any idea with any degree of precision, so I just looked up who she was related to using free online databases with my Android phone. Her father’s age was listed right there. Additionally I was surprised to find that he’d lived in the Pacific Northwest 30 years ago, though my friend seemed uninterested in this biographical data. Rather, she was a touch alarmed that within 30 seconds I retrieved this sort of information, and incidentally noted her sisters’ names, residences and employment histories.
Imagine this sort of functionality, and more so, integrated with something like Google Glass. Yes, I understand that initial privacy concerns will mute the creepier possibilities, but it’s likely a matter of time before someone enables functionality which is initially forbidden. Google won’t have a monopoly on this technology indefinitely. In terms of interpersonal relationships one could easily imagine artificial intelligence which is optimized toward tracking the eye movements of others and constantly outputting a stream of analytics toward the end user. As a concrete example consider a woman who is aware that her significant other has a history of straying. In social situations she could simply turn on a “head tracker” which would generate a frequency distribution over time of who he was looking at over the course of an hour at a party. Of course this sort of thing occurs intuitively and ad hoc already, but with the raw data recorded one might be able to generate much more powerful, persuasive, and incriminating inferences. This might elicit an adaptive response on the part of her boyfriend, but that might be the aim in the first place!
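The hypothetical “head tracker” analytic described above is, at bottom, just a dwell-time tally. A minimal sketch (every name and number below is invented):

```python
# Tally who a wearer's gaze lands on over a session and report the
# frequency distribution of attention.
from collections import Counter

# Each entry: (person gazed at, seconds the gaze lasted) — fake data.
gaze_events = [
    ("Alice", 4), ("Bob", 1), ("Alice", 6), ("Carol", 2),
    ("Alice", 9), ("Bob", 2), ("Alice", 5),
]

dwell = Counter()
for person, seconds in gaze_events:
    dwell[person] += seconds

total = sum(dwell.values())
for person, seconds in dwell.most_common():
    print(f"{person}: {seconds}s ({seconds / total:.0%})")
```

The raw event log is what makes this more potent than intuition: the same tally can be re-run, compared across parties, and presented as evidence.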
Starting ~10,000 years ago mankind took the step which introduced us to the world of privacy. Rather than small-to-medium-sized bands and villages governed by something like Dunbar’s number, we had the option of anonymity. In the 20th century urban life allowed for the possibility of relative withdrawal from social contacts and connections, if one so chose. Such a choice was not available to our ancestors, who were inextricably dependent upon their social network to buffer them from the vicissitudes of fate. The scenario which I’m outlining above, which I think is highly likely, does not correspond in its details to the ancient villages. Rather, the modern global village expands the scope of those in your potential social network up to the whole world.
The reality is of course that you won’t know billions and billions. But in crowded urban societies you’ll have access to personal details and information on nearly everyone you interact with, which may run into the tens of thousands per year, greater by orders of magnitude than the low hundreds posited for our ancestors. We shall adapt, I have no doubt about that, but how is the question. I envisage that some will become “privacy Amish,” creating retro communities bound together by the possibility of anonymity. For the majority there will emerge new rules and norms as to what is gauche and what is polite. Interesting times.
Today I was missing my daughter, so I decided to Skype with her on my phone. The phone has a camera which can record video, so I can talk to her, and if she gets bored I’ll show her something besides my face. I take this for granted, but it is interesting to reflect that my “video phone” is actually just a regular phone on which I installed a third party application to enable two way video calls. It’s a banal and marginal use for the device. Information technology is far more ubiquitous than the occasional video conference.
Over at The Atlantic there’s a piece up, The Touch-Screen Generation, which channels some of the moral panic sweeping across this nation. Some of this panic may be justified, but as noted in the article this is something we’ve all seen before, all the way back to Plato and his fellow travelers worrying about the effect of literacy upon memory. And as parents we’ll have to set ground rules and guide lines. Because of the social and networked nature of modern technology I doubt that we’re in any danger of raising up a generation of Solarians.
One aspect of the piece in The Atlantic which is not excessively emphasized is that not all children are the same. Some children take to books very early, and some never take to books. This may be dispositional or situational, but it is nevertheless critical in determining future life trajectory. The small initial differences between young children in their information content diverges radically as the readers continue to absorb and expand their knowledge base at a far higher rate than non-readers. Modern information technology has the potential to widen these gaps. While some children might peruse Wikipedia for hours on end, others could wile away the hours on video sites.
With the imminent demise of Google Reader there’s a lot of talk about how this is a death blow for RSS. I don’t really get this. Does anyone remember the stuff about “the death of comments” in the late 2000s? E.g.:
It’s sad and disappointing but the death of blog comments may be near. It’s getting harder and harder to fight against the hordes of spammers and mediocrity and animosity out there.
That’s from 2007. Granted, many blogs and media organizations have worthless comments sections. But not all by any stretch. And arguably technology like Disqus has made comments more, not less, relevant, due to features like “up voting” (I’m aware that Slashdot had this a long time ago!). Around the same time there was also the “death of email”. Like blog comments, email is still around.
The question is why? Because these formats have their own role in the information ecology. If you want to send a short, informal, missive to your friends now Facebook offers you an alternative to email. But if you want to send a longer formal message to a co-worker email is usually preferable (do you really want your boss to know your Facebook account?). Similarly, comments serve a particular function of public discussion which is important enough that defenses were developed by firms against their abuse.
For now you can find my feed at http://feeds.feedburner.com/GeneExpressionBlog (Google might shut down FeedBurner at some point). Here are some alternatives to Google Reader. And if you want to know why RSS matters, here’s another article. One issue with regards to RSS as I understand it is that the average web user isn’t too familiar with it. In contrast, someone like me who is a heavy information consumer finds it indispensable. So even if the RSS format dies, I’m pretty sure there would be applications which specialized in scraping data from websites and organizing it in an RSS-like manner.
One of the topics that occasionally crops up in personal conversations with friends is the rate of technological change. The more I live the more I feel that many of these discussions are predicated on the punctuated and precise emergence of technologies at a specific time and place (e.g., the web in 1995). And yet consider the “smartphone,” or more accurately, the phone as we understand it today. When the iPhone came out it was criticized for not being quite so radical or revolutionary, and yet I think the idea of the smartphone with a data plan has transformed the way we live our lives. It’s just not as sexy as more salient technologies. Sometimes there are even technologies which are obviously radical, but whose importance seems to bleed slowly into our lives. Within the next 5 years I assume that civilian “drones” will become ubiquitous and banal, whether we like them or not.
The rise of drones has the potential to radically centralize power and control. 3-D printing, on the other hand, pushes in the other direction. The apotheosis of this idea is a firm called Defcad, which made a splash at South by Southwest. Defcad emerged out of conflicts in the “Maker” subculture. Below is the introductory video of the founder:
Are you exhilarated? Or are you creeped out? Ultimately it may not matter. We’ve been waiting for the “future” to hit us since the 1950s. Perhaps the first quarter of the 21st century will usher in the radical transformations which have been the purview of science fiction for nearly a century.
After my last post on the inevitable nature of the shift of the book toward electronic formats, I revisited the data which highlight the decline in sales of e-readers. Some of this is probably competition with tablets. But I’ve had the same Kindle for two and a half years. I got a newer version of the Kindle for my wife, but have seen no need to upgrade for myself (and I got a Kindle Fire for my daughter). Why? The point of e-readers is the content, not the delivery. This reiterates that “e-books” aren’t revolutionary, they’re evolutionary, and the fixation on the technology is going to be transient. A true revolution in information transmission and delivery would be a direct data port, which would transform “publishing” in a much deeper fashion than the digitization of type and script.
I’ve had a Kindle for a few years now. I read a lot on it. And yet I observed something recently: I’ve stopped going to the library much. This is a big deal for me…probably since the age of 7 I’ve clocked in at least one visit to the public library per week in my life. I never turn books in past due because of the frequency with which I patronize the public or university libraries which I’ve had access to in life. Until recently. Now on occasion books go overdue, because I don’t go very often.
In the short term the Kindle has been a boon. But I’m not sure if it’s good for us in the long term. I’d rather pay more for a device which allowed for easier usage of different formats, as well as looser distribution policies.
Even Twitter? Can Twitter be declining? Over at the Atlantic‘s Technology Channel I note that my own Twitter conversations are not quite as dynamic as they once were, and speculate about why that might be. I didn’t say this in the post, but I wonder whether it might have something to do with people who enjoy online conversations also enjoying new tools and toys: perhaps we get tired of Twitter not because it has a deficiency, but just because it’s been around a while. I’m not suggesting this in lieu of the explanations I offer there, but in addition to them.
I think this is an artifact of the fact that Alan Jacobs seems to have been a very early Twitter adopter. Here’s Google Trends for the USA for searches for Twitter:
Last weekend I was at the Singularity Summit for a few days. There were interesting speakers, but the reality is that quite often a talk given at a conference has been given elsewhere, and there isn’t going to be much “value-add” in the Q & A, which is often limited and constrained. No, the point of the conference is to meet interesting people, and there were some conference goers who didn’t go to any talks at all, but simply milled around the lobby, talking to whoever they chanced upon.
I spent a lot of the conference talking about genomics, and answering questions about genomics, if I thought I could give a precise, accurate, and competent answer (e.g., I dodged any microbiome-related questions because I don’t know much about that). Perhaps more curiously, in the course of talking about personal genomics, issues relating to my daughter’s genotype came to the fore, and I would ask if my interlocutor had seen “the lion.” By the end of the conference a substantial proportion of the attendees had seen the lion.
This included a polite Estonian physicist. I spent about 20 minutes talking to him and his wife about personal genomics (since he was a physicist he grokked abstract and complex explanations rather quickly), and eventually I had to show him the lion. But during the course of the whole conference he was the only one who had a counter-response: he pulled up a photo of his 5 children! Touché! Only as I was leaving did I realize that I’d been talking the ear off of Jaan Tallinn, the lead developer of Skype. For much of the conference Tallinn stood like an impassive Nordic sentinel, engaging in discussions with half a dozen individuals in a circle (often his wife was at his side, though she often engaged people by herself). Some extremely successful and wealthy people manifest a certain reticence, rightly suspicious that others may attempt to cultivate them for personal advantage. Tallinn seems to be immune to this syndrome. His manner and affect resemble those of a graduate student. He was there to learn and listen, and was exceedingly patient even with the sort of monomaniacal personality which dominated conference attendees (I plead guilty!).
At the conference I had a press pass, but generally I just introduced myself by name. But because of the demographic I knew that many people would know me from this weblog, and that was the case (multiple times I’d talk to someone for 5 minutes, and they’d finally ask if I had a blog, nervous that they’d gone false positive). An interesting encounter was with a 22-year-old young man who explained that he stumbled onto my weblog while searching for content on the singularity. This surprised me, because this is primarily a weblog devoted to genetics, and my curiosity about futurism and technological change is marginal. Nevertheless, it did make me reconsider the relative paucity of information on the singularity out there on the web (or, perhaps websites discussing the singularity don’t have a high PageRank, I don’t know).
I also had an interesting interaction with an individual who was at his first conference. A few times he spoke of “Ray,” and expressed disappointment that Ray Kurzweil had not heard of Bitcoin, which was part of his business. Though I didn’t say it explicitly, I had to break it to this individual that Ray Kurzweil is not god. In fact, I told him to watch for the exits when Kurzweil’s time to talk came up. He would notice that many Summit volunteers and other V.I.P. types would head for the lobby. And that’s exactly what happened.
There are two classes of reasons why this occurs. First, Kurzweil gives the same talks many times, and people don’t want to waste their time listening to him repeat himself. Second, Kurzweil’s ideas are not universally accepted within the community which is most closely associated with Singularity Institute. In fact, I don’t recall ever meeting a 100-proof Kurzweilian. So why is the singularity so closely associated with Ray Kurzweil in the public mind? Why not Vernor Vinge? Ultimately, it’s because Ray Kurzweil is not just a thinker, he’s a marketer and businessman. Kurzweil’s personal empire is substantial, and he’s a wealthy man from his previous ventures. He doesn’t need the singularity “movement,” he has his own means of propagation and communication. People interested in the concept of the singularity may come in through Kurzweil’s books, articles, and talks, but if they become embedded in the hyper-rational community which has grown out of acceptance of the possibility of the singularity they’ll come to understand that Kurzweil is no god or Ayn Rand, and that pluralism of opinion and assessment is the norm. I feel rather ridiculous even writing this, because I’ve known people associated with the singularity movement for so many years (e.g., Michael Vassar) that I take all this as a given. But after talking to enough people, and even some of the more naive summit attendees, I thought it would be useful to lay it all out there.
As for the talks, many of them, such as Steven Pinker’s, would be familiar to readers of this weblog. Others, perhaps less so. Linda Avey and John Wilbanks gave complementary talks about personalized data and bringing healthcare into the 21st century. To make a long story short, it seems that Avey’s new firm aims to make the quantified self into a retail & wholesale business. Wilbanks made the case for grassroots and open source data sharing, both genetic and phenotypic. In fact, Avey explicitly suggested her new firm aims to be to phenotypes what her old firm, 23andMe, is to genotypes. I’m a biased audience; obviously I disagree very little with any of the arguments which Avey and Wilbanks deployed (I also appreciated Linda Avey’s emphasis on the fact that you own your own information). But I’m also now more optimistic about the promise of this enterprise after getting a more fleshed out case. Nevertheless, I see change in this space as a ten year project. We won’t see much difference in the next few years, I suspect.
The two above talks seem only tangentially related to the singularity in all its cosmic significance. Other talks also exhibited the same distance, such as Pinker’s talk on violence. But let me highlight two individuals who spoke more to the spirit of the Summit at its emotional heart. Laura Deming is a young woman whose passion for research really impressed me, and made me hopeful for the future of the human race. This is the quest for science at its purest. No careerism, no politics, just a straight up assault on an insurmountable problem. If I had to bet money, I don’t think she’ll succeed. But at least this isn’t a person who is going to expend her talents on making money on Wall Street. I’m hopeful that significant successes will come out of her battles in the course of a war I suspect she’ll lose.
The second talk which grabbed my attention was the aforementioned Jaan Tallinn’s. Jaan’s talk was about the metaphysics of the singularity, and it was presented in a congenial cartoon form. Since he is a physicist, it was larded with some of the basic presuppositions of modern cosmology (e.g., the multiverse), but also extended the logic in a singularitarian direction. And yet Tallinn ended his talk with a very humanistic message. I don’t even know what to think of some of his propositions, but he certainly has me thinking even now. Sometimes it’s easy to get fixated on your own personal obsessions, and lose track of the cosmic scale.
Which goes back to the whole point of a face-to-face conference. You can ponder grand theories in the pages of a book. For that to become human you have to meet, talk, engage, eat, and drink. A conference which at its heart is about transcending humanity as we understand it is, interestingly, very much a reflection of ancient human urges to be social, and part of a broader community.