The Life and Death of Blogs

Instead of the usual "sorry I haven't been posting" post, I thought I'd make you something special: my first infographic!

[Infographic: post stats for the blogs I follow in my RSS reader. The important observation: half of the blogs haven't updated in the last 86 days.]

A while ago I noticed a strange phenomenon where the act of subscribing to a blog seemed to kill it; a disproportionate number of seemingly thriving blogs would suddenly cease to be updated right after I subscribed to them. Did I have the Touch of Death, or was this just part of the circle of life? Clearly, this is a burning question, as it impacts both my sanity and the health & safety of hundreds of innocent blogs across the blogosphere.

We've all heard through the grapevine that two-thirds of blogs are abandoned, with a significant number of them having only a single post. (One of my papers cited this figure from a 2003 study, where "abandoned" is defined as not having been updated for two months; somewhat ironically, the URL cited as the source no longer exists. You can find an overview of the findings here.)

While some blogs convulse with obvious death throes near the end (a series of posts stating, "Sorry I haven't posted much lately, I promise I'll write more!"), many simply go dark with no warning. Some of my favourite now-dead blogs still happily display their last innocuous post. I admit I find this disturbing; I sort of feel as if blog services should overlay the homepage with a grey filter and an epitaph. That people abandon projects is no surprise to me, but the very nature of the Internet and blogs means that, while a failed knitting project can quietly collect dust in a corner of one's home, a failed blog is left to rot in full view of everyone on the Internet.

A journal

Whenever I walk into a bookstore (or, more rarely, a store-that-sells-only-pens-and-paper; I don't know what those stores are called), I am always drawn to the shelves full of journals. There is so much variation in journals that I've always wanted an excuse to buy them: leather-bound, handmade, bright, cute, tiny, rough, delicious-smelling... there's something captivating about a beautiful cover bound over creamy lined pages.

Something irresistible.

I had a problem, though. Like many people, I've never been able to keep a diary or journal. It's a pretty common gift for a teenager - especially one who likes to read - and I had enough journal and diary sets to last a few decades... but I could never get beyond the first week or so. I've always found my voice to be weak and childish, doubly so when I can't go back and edit my words. The biggest problem is that very often I have nothing important to say. I spend my days doing quiet activities, alone... nothing that is journal-worthy. I have a scrapbook for the bits of things I want to remember, and I alternate photo pages with more "scrappy" pages that feature life's debris: ticket stubs, pamphlets, doodles, receipts. For a long time that was a good enough souvenir of my past.

When scrapbooking wasn't enough, I started blogging and tweeting. But eventually I found that there were things in my life that were too long to tweet and too trivial to blog, and I started to see where a journal might possibly fit in my life.

[Image: Commonplace book, mid. 17th c., by Beinecke Flickr Laboratory]

This year has conspired to get me to start journaling. In reading annotated works of Lovecraft, I am again reminded of the "commonplace book," to which I was first introduced via A Series of Unfortunate Events. It's a fantastic idea, really - just a book where you keep writings, sketches, quotes and ideas as they occur throughout the day. A queer sense of longing...

A brief intro to algorithmic time complexity

The other day I was struck by the thought that something that is extremely obvious to computer scientists may, in fact, be completely unintuitive to other folks. It was this:

The running time of an algorithm does not necessarily scale linearly with the size of the input.

That is, if a program* (see note below) takes one minute to perform some function on one thousand items, it is not guaranteed that it will take ten minutes to perform this function on ten thousand items - it could take much, much longer.


*Here, I am actually talking about algorithms, not programs. An algorithm is a step-by-step outline of how to solve a problem - sort of like a recipe. It is not the same thing as the actual program you'd use to solve the problem. We use algorithms every day without ever applying that term to them! Every time you follow a recipe, a guide or tutorial, a process or routine, you're following an algorithm to accomplish a task.
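
To make the idea concrete, here is one everyday algorithm - searching through an unsorted pile for a particular name - written out as a few lines of Python. (This is just an illustrative sketch of my own; the function and variable names are made up for the example.)

    def find_name(pile, target):
        # Check each item in the pile, one at a time, front to back.
        for position, name in enumerate(pile):
            if name == target:
                return position  # found it: stop looking
        return None  # checked everything; the name isn't there

The steps are the algorithm; the Python is merely one of many possible ways of writing those steps down.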


In everyday life, we are used to dealing with linear relationships: if it takes one hour to drive a hundred kilometers, it will take two hours to drive two hundred kilometers; if it takes a day to read a 200-page novel, it will take a week to read a 1400-page book. As one factor increases, the other increases in a constant, linear proportion (e.g. one hour per 100 kilometers; one day per 200 pages). Linearity is very natural, and we conceptualize it easily.

There are, however, many other kinds of relationships. It is possible - and quite common - to have an algorithm that has a quadratic relationship between the size of input and the time taken to run. That is, for every n-fold increase in the size of the input, there is an n²-fold increase in the time it takes! Some other possible relationships between runtime and input size are cubic (n³), factorial (n!), exponential (2ⁿ, 3ⁿ...), and logarithmic (log n). (See "Examples," below)
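
To illustrate the difference (a hypothetical sketch of my own; the function names are made up), compare a linear algorithm with a quadratic one. Finding the largest number in a list looks at each item once, so doubling the list roughly doubles the work; checking whether any two items are equal compares every pair, so doubling the list roughly quadruples the work:

    # Linear: one pass over the list, about n steps for n items.
    def largest(numbers):
        best = numbers[0]
        for x in numbers:
            if x > best:
                best = x
        return best

    # Quadratic: compare every item against every later item,
    # about n * (n - 1) / 2 comparisons for n items.
    def has_duplicate(numbers):
        for i in range(len(numbers)):
            for j in range(i + 1, len(numbers)):
                if numbers[i] == numbers[j]:
                    return True
        return False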

This relationship between runtime and input size is called the time complexity of an algorithm. We use the time complexity as a way to compare the performance of different algorithms. An algorithm could run very quickly when it performs a function on only one hundred items, but its runtime could quickly balloon out to unreasonable amounts of time when given a million items. When evaluating the time complexity of an algorithm, we do not care about the exact amount of time it takes an algorithm to run; at no point do we make any actual time measurements. The only thing that comes close to being "counted" is the number of instructions executed, and even that is a very fuzzy "count." What we're interested in is the growth rate of the runtime when compared to the size of input.
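
Since we never reach for a stopwatch, one way to see these growth rates is to count loop iterations instead of seconds. Using the two sketch functions above (again, my own illustration), the linear algorithm's step count grows in lockstep with the input while the quadratic one balloons:

    # Count loop iterations rather than measuring wall-clock time.
    def count_steps(n):
        linear_steps = n                    # one pass: n iterations
        quadratic_steps = n * (n - 1) // 2  # every pair: n(n-1)/2 iterations
        return linear_steps, quadratic_steps

    for n in (100, 1000, 10000):
        lin, quad = count_steps(n)
        print(f"n={n:>6}: linear={lin:>6}, quadratic={quad:>12}")

    # Output:
    # n=   100: linear=   100, quadratic=        4950
    # n=  1000: linear=  1000, quadratic=      499500
    # n= 10000: linear= 10000, quadratic=    49995000

Each tenfold increase in n produces a tenfold increase in the linear count but roughly a hundredfold increase in the quadratic count - exactly the non-linear behaviour described above.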

"Whenever you find that you are on the side of the majority, it is time to pause and reflect."