Visualizing algorithms [Absolutely stunning]
Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.
But algorithms are also a reminder that visualization is more than a tool for finding patterns in data. Visualization leverages the human visual system to augment human intellect: we can use it to better understand these important abstract processes, and perhaps other things, too.
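Even a text-only sketch makes the point: what gets drawn is the algorithm's behavior, not a dataset. Here is a minimal, hypothetical Python example (the names and the bar rendering are my own, not from the essay) that captures each intermediate state of an insertion sort so the states themselves can be displayed:

```python
# Minimal sketch: visualize an algorithm (insertion sort) rather than a dataset.
# Each snapshot is one intermediate state; rendered as bars of '#' characters,
# the sorting behavior itself becomes the thing on display.

def insertion_sort_states(values):
    """Yield a snapshot of the list after each insertion step."""
    a = list(values)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        yield list(a)

def render(state):
    """Draw one state as horizontal bars, one value per line."""
    return "\n".join("#" * v for v in state)

if __name__ == "__main__":
    for step, state in enumerate(insertion_sort_states([5, 2, 9, 1, 7]), 1):
        print(f"step {step}:\n{render(state)}\n")
```

Swapping the renderer (bars, colors, animation frames) changes the design, but the input stays the same: the rules of the algorithm, not a chart-ready dataset.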
In an increasingly complex and specialised world, Gigerenzer preaches a gospel of greater simplicity. He suggests that the outcomes of decisions of any real complexity – anything as complicated as organising a successful picnic, or more so – are impossible to predict accurately with any mathematically rational model, and are therefore more usefully approached with a mixture of gut instinct and what he calls heuristics, the learned rules of thumb of any given situation. He believes, and has some evidence to support it, that such judgements prove sounder in practice than those based purely on probability.
(see also: Instinct can beat rational thinking)
The goal of the NeuroElectro Project is to extract information about the electrophysiological properties (e.g. resting membrane potentials and membrane time constants) of diverse neuron types from the existing literature and place it into a centralized database.
The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets.
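Breiman's contrast can be made concrete with a toy sketch (standard library only; the dataset and function names are illustrative, not from the paper): a "data model" posits a mechanism and estimates its parameters, here a least-squares line, while an "algorithmic model" skips the mechanism and just predicts, here by nearest neighbor:

```python
# Sketch of the two cultures on a toy dataset (stdlib only).
# Data model: assume y = a + b*x plus noise and estimate a, b by least squares.
# Algorithmic model: assume nothing about the mechanism; copy the nearest y.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def nearest_neighbor(xs, ys):
    """Return a predictor that copies the y of the closest training x."""
    def predict(x):
        i = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
        return ys[i]
    return predict

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x

a, b = fit_line(xs, ys)
nn = nearest_neighbor(xs, ys)
print(f"data model:        y = {a:.2f} + {b:.2f} * x")
print(f"algorithmic model: f(3.2) = {nn(3.2)}")
```

The line is interpretable but stands or falls with its assumed form; the neighbor rule assumes nothing and can only be judged by predictive accuracy, which is exactly the trade-off Breiman describes.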
So it’s pretty clear by now that statistics and machine learning aren’t very different fields. I was recently pointed to a very amusing comparison by the excellent statistician (and machine learning expert) Robert Tibshirani. Reproduced here…
Hah. Or rather, ouch! I had two thoughts reading this. (1) Poor statisticians. Machine learners invent annoying new terms, sound cooler, and have all the fun. (2) What’s wrong with statistics? They have way less funding and influence than it seems they might deserve.
As pointed out in my previous post, there are dozens of ways to solve a given modeling problem. Each model assumes something different, and it’s not obvious how to navigate and identify which assumptions are reasonable. In industry, most practitioners pick the modeling algorithm they are most familiar with rather than the one that best suits the data. In this post, I would like to share some common mistakes (the don’ts). I’ll save the best practices (the dos) for a future post.
I started him off easy with a walk around an art gallery and then graduated to a simple undersea stroll. He was blown away by it. He had never been so amazed by a new technology, he told me – and this is a guy whose favorite programs as a kid were radio shows. I asked if he was ready to try something more intense, and he said yes, so I put him in the roller coaster. We recorded the results.
A glance at the most important campaign promises of the Best Party is more than enough to highlight the audacity of Reykjavik’s voters. They were promised free towels at swimming pools, a polar bear for the zoo, the import of Jews, “so that someone who understands something about economics finally comes to Iceland”, a drug-free parliament by 2020, inaction (“we’ve worked hard all our lives and want to take a well-paid four-year break now”), Disneyland with free weekly passes for the unemployed (“where they can have themselves photographed with Goofy”), greater understanding for the rural population (“every Icelandic farmer should be able to take a sheep to a hotel for free”), free bus tickets. And all this with the caveat: “We can promise more than any other party because we will break every campaign promise.”
We need a National Neurotechnology Initiative (NNTI) that requires $2B in yearly funding. Expanding the BRAIN Initiative to NNTI, we will achieve the goals of curing diseases and understanding brain function. Mapping the brain is just one step in the process. We need to supplement the NIH investment with funding from each major government agency. Interdisciplinary research is integral to success in this national endeavor. Additionally, we need to create a National Coordinating Office that will oversee the investments from other agencies to synchronize the research efforts. Without continued coordination, we will lose sight of what we cherish most – our health, our minds, and our future.
6. A growlery is a place you like to retire to when you’re unwell or in a bad mood. It was coined by Charles Dickens in Bleak House (1853).
28. In the 18th century, a clank-napper was a thief who specialized in stealing silverware.
30. 11% of the entire English language is just the letter E.
56. In mediaeval Europe, a moment was precisely 1/40th of an hour, or 90 seconds.
If we believe the stats, thinking about sex every seven seconds adds up to 514 times an hour – or roughly 7,200 times during each waking day. Is that a lot? It sounds like a big number to me; I’d imagine it’s bigger than the number of thoughts I have about anything in a day. So, here’s an interesting question: how is it possible to count my thoughts, or anyone else’s (sexual or otherwise), over the course of a day?
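For what it’s worth, the arithmetic behind those figures only lines up if you assume about 14 waking hours (an assumption on my part; the article doesn’t say). A two-line sanity check:

```python
# Sanity-checking the claim: one thought every 7 seconds.
SECONDS_PER_HOUR = 3600
per_hour = SECONDS_PER_HOUR / 7   # about 514 per hour
waking_hours = 14                 # assumed; this is what yields the ~7,200 figure
per_day = per_hour * waking_hours
print(round(per_hour), round(per_day))
```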
According to Loewi’s account, one night in 1921 he fell asleep while reading. He then had a dream in which he visualized an experiment that could put an end to the debate over how nerves communicated with one another. He woke up in the middle of the night, scribbled some notes about this potentially groundbreaking experiment, and then fell back to sleep. To his great frustration, however, when he awoke again he couldn’t read the notes he had written.
Barricelli’s experiments had an aesthetic side, too. Uncommonly for the time, he converted the digital 1s and 0s of the computer’s stored memory into pictorial images. Those images, and the ideas behind them, would influence computer animators in generations to come. Pixar cofounder Alvy Ray Smith, for instance, says Barricelli stirred his earliest thinking about the possibilities for computer animation, and beyond that, his philosophical muse. “What we’re really talking about here is the notion that living things are computations,” he says. “Look at how the planet works and it sure does look like a computation.”
Rather than actually doing math, let’s think like economists. Picking the set R gives us a certain benefit, in the form of the power Q(R), and a cost, tP(R). (The tα term is the same for all R.) Economists, of course, tell us to equate marginal costs and benefits. What is the marginal benefit of expanding R to include a small neighborhood around the point x? Just, by the definition of “probability density”, q(x). The marginal cost is likewise tp(x). We should include x in R if q(x) > tp(x), that is, if q(x)/p(x) > t. The boundary of R is where marginal benefit equals marginal cost, and that is why we need the likelihood ratio and not the likelihood difference, or anything else. (Except for a monotone transformation of the ratio, e.g. the log ratio.) The likelihood ratio threshold t is, in fact, the shadow price of statistical power.
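The marginal-cost/marginal-benefit argument can be checked numerically. A sketch under stated assumptions (p and q are taken to be unit-variance Gaussians purely for illustration; the argument holds for any densities): sweep the threshold t and integrate p and q over R = {x : q(x)/p(x) > t} to watch size and power trade off.

```python
# Numerical check of the likelihood-ratio region R = {x : q(x)/p(x) > t}.
# p is the null density, q the alternative; both Gaussian here, which is an
# assumption for illustration only, not part of the argument.
import math

def gauss(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def p(x): return gauss(x, 0.0, 1.0)   # null
def q(x): return gauss(x, 1.0, 1.0)   # alternative

def size_and_power(t, lo=-10.0, hi=10.0, n=20001):
    """Integrate p and q over R = {x : q(x) > t*p(x)} with a Riemann sum."""
    dx = (hi - lo) / (n - 1)
    P = Q = 0.0
    for i in range(n):
        x = lo + i * dx
        if q(x) > t * p(x):
            P += p(x) * dx   # size: probability of R under the null
            Q += q(x) * dx   # power: probability of R under the alternative
    return P, Q

for t in (0.5, 1.0, 2.0):
    P, Q = size_and_power(t)
    print(f"t={t}: size P(R)={P:.3f}, power Q(R)={Q:.3f}")
```

Raising the price t shrinks both P(R) and Q(R), exactly as the shadow-price reading suggests: a stricter budget on size buys less power.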
But Dr Glowacki, a Royal Society Research Fellow, was so overcome during the ‘Hallelujah Chorus’ he began lurching from side to side with his hands raised and whooping before attempting to crowd-surf, witnesses claimed.