Unrelated to all that 8/2 edition

Oh, how time flies.

Hitchhiking robot makes it to Philadelphia before getting dismembered because…Philly

Here is a before and after photo. Some enterprising /r/philadelphia redditors are hoping to find and repair him, but no one has a clue where he is (though there are other suggestions).

Dabbawalas: Mumbai’s lunchbox carriers

Studied by consultants and business schools for the secrets of their proclaimed near-flawless efficiency, the dabbawalas have been feted by British royals (Prince Charles) and titans of industry (Richard Branson) alike. Even FedEx, which supposedly knows something about logistics, has paid them a visit. In 2010, the Harvard Business Review published a study of the dabbawala system entitled “On-Time Delivery, Every Time”. In it, the authors asserted that the dabbawalas operate to Six Sigma standards even though they have few special skills, charge a minimal fee (around $10-$13 a month) and use no IT…

Rishi Khiani, a serial entrepreneur, speaks a different language. His office is all new India — swipe cards at the entrance, bright young things at open-plan desks and green tea for guests. Khiani has recently acquired a company called Meals on Wheels, which he has jazzed up with the name Scootsy and kitted out with brightly coloured motorbikes. His deliverymen, who will earn slightly more than dabbawalas, are armed with Android devices and an app that allows customers to follow their orders on their smartphones. “They’re giving us pings back on our CRM,” says Khiani, using the acronym for “customer relationship management” tool as he flips through his PowerPoint presentation. “That tells us where any person is at any given time.” Scootsy won’t just deliver takeaways from QSRs, he says, meaning quick service restaurants. It will soon branch out into other categories — groceries, flowers, electronic goods. “It’s masspirational,” he says. “We’re going to be the Uber for everything.”

The man who studies everyday evil

The “bug crushing machine” offered the perfect way for Paulhus and colleagues to test whether that reflected real life behaviour. Unknown to the participants, the coffee grinder had been adapted to give insects an escape route – but the machine still produced a devastating crushing sound to mimic their shells hitting the cogs. Some were so squeamish they refused to take part, while others took active enjoyment in the task. “They would be willing not just to do something nasty to bugs but to ask for more,” he says, “while others thought it was so gross they didn’t even want to be in the same room.” Crucially, those individuals also scored very highly on his test for everyday sadism.

Really the best introduction to machine learning/decision trees that you will find


Tom Insel offers suggestions for what to do with all the data we will get from the brain

While we don’t have a unified field theory of the brain, some of the early projects in the BRAIN Initiative are providing models of how behavior emerges from brain activity. One of the first grants issued by the BRAIN Initiative supported scientists at NIMH and the University of Maryland to understand how the activity of individual neurons is integrated into larger patterns of brain activity. This work builds on the observation that in nature, order sometimes emerges out of the chaos of individual interacting elements.

His main suggestion: ¯\_(ツ)_/¯

This is what it will look like when nature reclaims a city


Rethinking fast and slow

Everyone except homo economicus knows that our brains have multiple processes to make decisions. Are you going to make the same decision when you are angry as when you sit down and meditate on a question? Of course not. Kahneman and Tversky have famously reduced this to ‘thinking fast’ (intuitive decisions) and ‘thinking slow’ (logical inference) (1).

Breaking these decisions up into ‘fast’ and ‘slow’ makes it easy to design experiments that can disentangle whether people use their lizard brains or their shiny silicon engines when making any given decision. Here’s how: give someone two options, let’s say a ‘greedy’ option or an ‘altruistic’ option. Now simply look at how long it takes them to choose each option. Is it fast or slow? Congratulations, you have successfully found that greed is intuitive while altruism requires a person to sigh, restrain themselves, think things over, clip some coupons, and decide on the better path.

This method actually is a useful way of investigating how the brain makes decisions; harder decisions really do take longer for the brain to process, and we have the neural data to prove it. But there’s the rub. When you make a decision, it is not simply a matter of intuitive versus deliberative. It is also a matter of how hard the question is. And this really depends on the person. Not everyone values money in the same way! Or even in the same way at different times! I really want to have a dollar bill on me when it is hot and humid and I am in front of a soda machine. I care about a dollar bill a lot less when I am at home in front of my fridge.

So let’s go back to classical economics; let’s pretend that we can measure how much someone values money with a utility curve. Measure everyone’s utility curve and find their indifference point – the point at which they don’t care about making one choice over the other. Now you can ask about relative speed. If someone makes each decision 50% of the time but one decision is still faster, then you can say something about the relative reaction times and ways of processing.

[Figure: dictator game, fast and slow]

And what do you find? In some of the classic experiments – nothing! People make each decision equally often and equally quickly! Harder decisions require more time, and that is what is being measured here. People have heterogeneous preferences, and you cannot accurately measure decisions without taking this into account subject by subject. No one cares about the population average: we only care about what an individual will do.
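To make the point concrete, here is a minimal simulation of the kind of reanalysis being argued for. Everything in it is invented – the preferences, the reaction-time model, the offer amounts – it is just meant to show how pooling across subjects with different indifference points can conjure a ‘selfishness is fast’ effect out of thin air:

```python
# Toy simulation of reaction-time "reverse inference" (all numbers invented).
# Each simulated subject has their own indifference point; decisions get slower
# the closer an offer is to that point. Pooling naively makes the selfish choice
# look "intuitive"; restricting to near-indifference trials makes the effect vanish.
import numpy as np

rng = np.random.default_rng(0)

def mean_rts(near_indifference_only):
    selfish_rts, altruistic_rts = [], []
    for _ in range(200):                              # simulated subjects
        indiff = rng.normal(3.0, 1.0)                 # $ kept at which this subject is indifferent
        offers = rng.uniform(0, 10, size=300)         # $ the subject could keep on each trial
        distance = np.abs(offers - indiff)            # far from indifference = easy decision
        rts = 1.5 - 0.08 * distance + rng.normal(0, 0.05, size=300)   # easy = fast
        selfish = offers > indiff
        keep = distance < 1.0 if near_indifference_only else np.ones(len(offers), dtype=bool)
        if (keep & selfish).any() and (keep & ~selfish).any():
            selfish_rts.append(rts[keep & selfish].mean())
            altruistic_rts.append(rts[keep & ~selfish].mean())
    return np.mean(selfish_rts), np.mean(altruistic_rts)

print("pooled over all trials:  selfish %.2f s, altruistic %.2f s" % mean_rts(False))
print("near indifference only:  selfish %.2f s, altruistic %.2f s" % mean_rts(True))
```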

[Figure: temporal discounting, fast and slow]

But this is a fairly subtle point. This simple one-dimensional metric – how fast you respond to something – may not be able to distinguish whether those who use their ‘lizard brain’ simply have a greater utility for money (this is where brain imaging would come in to save the day).

No one is arguing that there are not multiple systems of decision-making in the brain – some faster and some slower, some that will come up with one answer and some that will come up with another. But we must be very, very careful when attempting to measure which is fast and which is slow.

(1) This is still ridiculously reductive, but still miles better than the ‘we compute utility this one way’ style of thinking.

Reference

Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6. DOI: 10.1038/ncomms8455

What we talk about when we talk about color


The most recent issue of Nautilus focuses on color. As everyone who was anywhere near the internet in February knows, color is not some immutable property of the world but is instead produced through perception. Color perception is a reaction to the world we experience – but then this feeds back to shape the world itself:

Minions are the first animated characters to have their own Pantone color. Why aren’t there others?

You have to be cautious of cultural trends and meaning: They change; they are mutable. Some colors, like Chinese Red, will forever be seen in the cultural view. But today, even Chinese Red is not as pervasive as it was before. If you go to China now, people are wearing very westernized colors and clothes because that is what has currency now. When it comes to the world of entertainment, these trends change far more quickly. Several years ago, we were naming a purple, and Barney the dinosaur [from the children’s television show Barney & Friends] was very popular. Someone said, “Let’s name it Barney! It’s such a popular show, everyone will recognize it.” And I said, “I don’t know if you want to go there. What you need to consider is, 15 years from now, will people know what Barney Purple is?” If kids no longer watch Barney, that puts a “datedness” on the color.

Do people’s responses to color ever surprise you?

A great example is the color brown. Years ago, in word association tests, when we showed people different browns they would invariably say “it is dirty,” “it is soiled,” “it is unpleasant.” But then came what I like to call the Starbucks Phenomenon. Suddenly brown evokes some exotic coffee drink many of us are committed to on a daily basis.

Despite blue representing sadness in the Anglosphere, it is still generally a positive color:

“For blue, most of the things that we associate with it are positive,” she says, including clear skies and clean water. Schloss found this to be the case in the United States, Japan, and China. In their U.S. study for example, 72 participants were shown a color on a neutral background and were asked to write as many descriptions of objects that were typically associated with the color before them. Then 98 different participants were shown each of the 222 descriptions written in black text on a white background, and they were asked to rank how positively or negatively they felt about each. Finally, 31 new participants were shown the color along with the associated description, and asked to rate the strength of the match between the color and description. Schloss and her colleagues found a strong, positive association between blue and clean water. Other studies support this finding. A 2013 study found major cultural differences in the color associations among the British and the Himba, a semi-nomadic group in Namibia. Yet these groups still associated blue and clean water.

Here is a philosopher’s take on color (read through for some of the history and the realist and anti-realist arguments). Hint: she thinks vision is inference.

My response is to say that colors are not properties of objects (like the U.N. flag) or atmospheres (like the sky) but of perceptual processes—interactions which involve psychological subjects and physical objects. In my view, colors are not properties of things, they are ways that objects appear to us, and at the same time, ways that we perceive certain kinds of objects. This account of color opens up a perspective on the nature of consciousness itself.

Indeed, I argue, colors are not properties of minds (visual experiences), objects or lights, but of perceptual processes—interactions that involve all three terms. According to this theory, which I call “color adverbialism,” colors are not properties of things, as they first appear. Instead, colors are ways that stimuli appear to certain kinds of individuals, and at the same time, ways that individuals perceive certain kinds of stimuli. The “adverbialism” comes in because colors are said to be properties of processes rather than things. So instead of treating color words as adjectives (which describe things), we should treat them as adverbs (which describe activities). I eat hurriedly, walk gracelessly, and on a fine day I see the sky bluely!

I have posted many of these before, but here are some pictures I have collected showing how color is affected by culture:

Words affect the colors we talk about

[Image: color words by language]

Crayola is trying to name all of the colors


Color in movie posters through time


Color in paintings through time


The living building

Buildings – even the most cement-filled – are organic; they change through interaction with the parasites that infest them (us, mostly). How often do architects consider this? Ask any scientist who moves into a new laboratory building and you’ll be met with eyerolls and exasperated stories. The new neuroscience institute that I work in is fantastic in many ways, but it has some extremely puzzling features, such as the need to repeatedly use an ID card to unlock almost every door in the lab. This is in contrast to my previous home, the Salk Institute, which was one long open space separated only by clear glass, allowing free movement and easy collaboration.

I mostly mention this because the video above – on How Buildings Learn – has a fantastic story at the beginning about MIT’s famous Media Lab:

I was at the Media Lab when it was brand new. In the three months I was there, the elevator caught fire, the revolving door kept breaking, every doorknob in the building had to be replaced, the automatic door-closer was stronger than people and had to be adjusted, and an untraceable stench of something horrible dead filled the lecture hall for months. This was normal.

In many research buildings, a central atrium serves to bring people together with open stairways, casual meeting areas, and a shared entrance where people meet daily. The Media Lab’s entrance cuts people off from each other; there are three widely separated entrances each huge and glassy; three scattered elevators; few stairs; and from nowhere can you see other humans in the five story space. Where people might be visible, they are carefully obscured by internal windows of smoked glass.

Rationality and the machina economicus

Science magazine had an interesting series of review articles on Machine Learning last week. Two of them were different perspectives of the exact same question: how does traditional economic rationality fit into artificial intelligence?

At the core of much AI work are concepts of optimal ‘rational decision-makers’. That is, the intelligent program is essentially trying to maximize some defined objective function, known in economics as maximizing utility. Where the computer and economic traditions diverge is in their implementation: computers need algorithms, and often need to take into account non-traditional resource constraints such as time, whereas in economics this is left unspecified outside of trivial cases.

[Figure: economics of thinking]

How can we move from the classical view of a rational agent who maximizes expected utility over an exhaustively enumerable state-action space to a theory of the decisions faced by resource-bounded AI systems deployed in the real world, which place severe demands on real-time computation over complex probabilistic models?

We see the attainment of an optimal stopping time, in which attempts to compute additional precision come at a net loss in the value of action. As portrayed in the figure, increasing the cost of computation would lead to an earlier ideal stopping time. In reality, we rarely have such a simple economics of the cost and benefits of computation. We are often uncertain about the costs and the expected value of continuing to compute and so must solve a more sophisticated analysis of the expected value of computation.
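To make that stopping-time tradeoff concrete, here is a toy calculation. The benefit curve and the costs are invented; the point is only that raising the cost of computation pulls the optimal stopping time earlier:

```python
# Toy version of the optimal-stopping tradeoff: more computation buys diminishing
# precision, and each unit of computation costs something. All curves/costs invented.
import numpy as np

def net_value(t, cost_per_step):
    benefit = 1.0 - np.exp(-0.5 * t)       # diminishing returns to thinking longer
    return benefit - cost_per_step * t     # minus the price of the computation

t = np.arange(0, 20, 0.01)
for cost in (0.01, 0.05, 0.10):
    best_t = t[np.argmax(net_value(t, cost))]
    print(f"cost per unit of computation {cost:.2f} -> stop at t ~ {best_t:.1f}")
# Analytically the optimum is at t* = 2 * ln(0.5 / cost): the higher the cost,
# the earlier the ideal stopping time.
```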

Humans and other animals appear to make use of different kinds of systems for sequential decision-making: “model-based” systems that use a rich model of the environment to form plans, and a less complex “model-free” system that uses cached values to make decisions. Although both converge to the same behavior with enough experience, the two kinds of systems exhibit different tradeoffs in computational complexity and flexibility. Whereas model-based systems tend to be more flexible than the lighter-weight model-free systems (because they can quickly adapt to changes in environment structure), they rely on more expensive analyses (for example, tree-search or dynamic programming algorithms for computing values). In contrast, the model-free systems use inexpensive, but less flexible, look-up tables or function approximators.
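The flexibility/cost tradeoff in that passage is easy to caricature in a few lines. This is not the reviewers’ model, just a sketch with made-up rewards: the ‘model-based’ agent consults a model of the world at decision time and adapts instantly when the world changes, while the ‘model-free’ agent keeps using its cheap cached values until new experience overwrites them:

```python
# Caricature of "model-based" vs "model-free" decision-making (invented numbers).

# Suppose both systems learned from the same history: action 'a' paid 1.0, 'b' paid 0.5.
world = {"a": 1.0, "b": 0.5}        # the model-based system's model of the environment
cached_q = dict(world)              # the model-free system's cached action values

def model_based_choice(model):
    # Plan at decision time by evaluating actions against the current model
    # (in a sequential task this would be a tree search or dynamic programming).
    return max(model, key=model.get)

def model_free_choice(q):
    # Cheap: a single table lookup, no model consulted.
    return max(q, key=q.get)

# The environment changes: action 'b' now pays 2.0.
world["b"] = 2.0

print("model-based picks:", model_based_choice(world))    # adapts immediately -> 'b'
print("model-free picks: ", model_free_choice(cached_q))  # stale cache -> still 'a'
# The model-free cache only catches up after more experience, e.g. incremental updates:
#   cached_q["b"] += alpha * (new_reward - cached_q["b"])
```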

That being said, what does economics have to offer machine learning? Parkes and Wellman try to offer an answer and basically say – game theory. Which is not something that economics can ‘offer’ so much as ‘offered a long, long time ago’. A recent interview with Parkes puts this in perspective:

Where does current economic theory fall short in describing rational AI?

Machina economicus might better fit the typical economic theories of rational behavior, but we don’t believe that the AI will be fully rational or have unbounded abilities to solve problems. At some point you hit the intractability limit—things we know cannot be solved optimally—and at that point, there will be questions about the right way to model deviations from truly rational behavior…But perfect rationality is not achievable in many complex real-world settings, and will almost surely remain so. In this light, machina economicus may need its own economic theories to usefully describe behavior and to use for the purpose of designing rules by which these agents interact.

Let us admit that economics is not fantastic at describing trial-to-trial individual behavior. What can economics offer the field of AI, then? Systems for multi-agent interaction. After all, markets are at the heart of economics:

At the multi-agent level, a designer cannot directly program behavior of the AIs but instead defines the rules and incentives that govern interactions among AIs. The idea is to change the “rules of the game”…The power to change the interaction environment is special and distinguishes this level of design from the standard AI design problem of performing well in the world as given.

For artificial systems, in comparison, we might expect AIs to be truthful where this is optimal and to avoid spending computation reasoning about the behavior of others where this is not useful…. The important role of mechanism design in an economy of AIs can be observed in practice. Search engines run auctions to allocate ads to positions alongside search queries. Advertisers bid for their ads to appear in response to specific queries (e.g., “personal injury lawyer”). Ads are ranked according to bid amount (as well as other factors, such as ad quality), with higher-ranked ads receiving a higher position on the search results page.

Early auction mechanisms employed first-price rules, charging an advertiser its bid amount when its ad receives a click. Recognizing this, advertisers employed AIs to monitor queries of interest, ordered to bid as little as possible to hold onto the current position. This practice led to cascades of responses in the form of bidding wars, amounting to a waste of computation and market inefficiency. To combat this, search engines introduced second-price auction mechanisms, which charge advertisers based on the next-highest bid price rather than their own price. This approach (a standard idea of mechanism design) removed the need to continually monitor the bidding to get the best price for position, thereby ending bidding wars.
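The incentive difference is easy to see numerically. Here is a stripped-down, single-slot toy with my own numbers (real ad auctions use generalized second-price rules with quality scores, so this is only the flavor): under first-price rules the best bid depends on what rivals do, so an agent has to keep probing, while under second-price rules bidding your true value is best no matter what anyone else does:

```python
# Toy single-slot auction (invented values): why second-price rules remove the
# incentive to continually monitor and shade bids.
import numpy as np

rng = np.random.default_rng(1)
my_value = 5.0                                  # what a click is worth to me
rivals = rng.uniform(0, 10, size=100_000)       # unknown competing bids

def avg_payoff(bid, rule):
    win = bid > rivals
    price = bid if rule == "first" else rivals  # first-price: pay your bid; second-price: pay the rival's
    return np.where(win, my_value - price, 0.0).mean()

bids = np.arange(0.5, 10.01, 0.25)
for rule in ("first", "second"):
    best = bids[np.argmax([avg_payoff(b, rule) for b in bids])]
    print(f"{rule}-price auction: best fixed bid = {best}")
# first-price: optimum ~2.5, well below my true value and sensitive to rival behavior
# second-price: optimum = 5.0, my true value, regardless of what rivals do
```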

But what comes across most in the article is how much economics needs to seriously consider AI (and ML more generally):

The prospect of an economy of AIs has also inspired expansions to new mechanism design settings. Researchers have developed incentive-compatible multiperiod mechanisms, considering such factors as uncertainty about the future and changes to agent preferences because of changes in local context. Another direction considers new kinds of private inputs beyond preference information.

I would have loved to see an article on “what machine learning can teach economics” or how tools in ML are transforming the study of markets.

Science also had one article on “trends and prospects” in ML and one on natural language processing.

References

Parkes, D., & Wellman, M. (2015). Economic reasoning and artificial intelligence. Science, 349 (6245), 267-272. DOI: 10.1126/science.aaa8403

Gershman, S., Horvitz, E., & Tenenbaum, J. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349 (6245), 273-278. DOI: 10.1126/science.aac6076

The Obama raise (for scientists)

My dad, being my dad, sent me an article claiming that Obama was about to change overtime rules so that more salaried workers will be covered. Would I get paid more? Psh, yeah right, I said. But then I looked a bit more closely and it wasn’t so clear. The new proposed rules state that anyone making under $47,892 would be eligible for overtime (whereas previously the limit was a measly $24,000). That is: most postdocs will, technically, be eligible for overtime money if they work more than 40 hours per week.

So I decided to ask the Twitter hivemind and set off a predictable storm of WTFs. The summary is: yes, it looks right now like postdocs will be eligible for overtime pay, but there is a comment period to propose exceptions to these rules (I don’t think graduate students will be, because they are “students”). No, no one thinks this will actually end up happening; somehow the NIH/NSF will make postdocs exempt from these rules (see a bit more here). Here are the full proposed rules. If you have opinions about these rules, please send in comments:

The Notice of Proposed Rulemaking (NPRM) was published on July 6, 2015 in the Federal Register (80 FR 38515) and invited interested parties to submit written comments on the proposed rule at www.regulations.gov on or before September 4, 2015. Only comments received during the comment period identified in the Federal Register published version of the NPRM will be considered part of the rulemaking record.

I was asked to do a storify of all the twitter responses but, uh, there were a lot of them and I wasn’t 100% paying attention. So here are some salient points:

  1. What are the job duties of a postdoc? Does going to a lecture count, or will that not count toward “work time” (if it does, do I get credit for reading a salient paper at home? At lab?)
  2. Is a fellow an employee, or are they different somehow? Is this technically a “salary”? This seems to be a point of confusion and came up repeatedly.
  3. Calling PDs “trainees” while also claiming them as exempt “learned professionals” is a joke.
  4. This may increase the incentive to train PhDs and decrease the incentive to hire postdocs (“For my lab, min PD cost would be $62k/yr, PhD cost $31k/yr all-in.”). Similarly, the influence may be felt most by small labs with less money, less by large “prestige” labs.

#1 is the most interesting question in general.

If enforced at all (hmm), this would functionally be like a decrease in NIH/NSF/etc. funding. But let’s face it, I think we can all agree that the most likely outcome here is an ‘exemption’ for postdocs and other scientists…

Edit: I meant to include this: currently in the overtime rules, there is a “learned professional” exemption that describes scientists – which is why they do not get overtime pay. In order to qualify for that exemption, there is a salary floor that they must make ($455 per week, or ~$23,660 per year). The new proposed rules will state:

In order to maintain the effectiveness of the salary level test, the Department proposes to set the standard salary level equal to the 40th percentile of earnings for full-time salaried workers ($921 per week, or $47,892 annually for a full-year worker, in 2013)

The NIH stipend levels are currently $42,480 for a first-year postdoc, increasing yearly and passing this threshold in year 4. The fastest way to avoid the overtime rules would be to simply bump the first-year salary up to $47,893.
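For what it’s worth, here is the arithmetic behind those figures (using only the numbers quoted above):

```python
# Back-of-the-envelope check of the overtime thresholds quoted above.
old_threshold = 455 * 52        # "learned professional" salary floor: $23,660/yr
new_threshold = 921 * 52        # proposed 40th-percentile level:      $47,892/yr
first_year_postdoc = 42_480     # first-year salary quoted above

print(old_threshold, new_threshold)                                      # 23660 47892
print("raise needed to stay exempt:", new_threshold - first_year_postdoc + 1)   # 5413
```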

Because brains are packed with knowledge and are yummy, that’s why

[Image: creamy brains]

“Mr. Sheep Man,” I asked, “why would that old man want to eat my brains?”

“Because brains packed with knowledge are yummy, that’s why. They’re nice and creamy. And sort of grainy at the same time.”

“So that’s why he wants me to spend a month cramming information in there, to suck it up afterward?”

“That’s the idea.”

“Don’t you think that’s awfully cruel?” I asked. “Speaking from the suckee’s point of view, of course.”

“But, hey, this kind of thing’s going on in libraries everywhere, you know. More or less, that is.”

This news staggered me. “In libraries everywhere?” I stammered.

(The Strange Library, Haruki Murakami)

Oh hi! I am still alive, physically if not so much mentally. Research, fellowship applications, and the like got too much for me over the past few months. Hopefully I can resume my normal posting schedule? To keep all of your brains nice and creamy, of course.

Small autonomous drones

Nature has a fascinating review on drones – and especially microdrones!

[Figure: microdrones]

For those who don’t have access, here are some highlights (somewhat technical):

Propulsive efficiencies for rotorcraft degrade as the vehicle size is reduced; an indicator of the energetic challenges for flight at small scales. Smaller size typically implies lower Reynolds numbers, which in turn suggests an increased dominance of viscous forces, causing greater drag coefficients and reduced lift coefficients compared with larger aircraft. To put this into perspective, this means that a scaled-down fixed-wing aircraft would be subject to a lower lift-to-drag ratio and thereby require greater relative forward velocity to maintain flight, with the associated drag and power penalty reducing the overall energetic efficiency. The impacts of scaling challenges (Fig. 3) are that smaller drones have less endurance, and that the overall flight times range from tens of seconds to tens of minutes — unfavourable compared with human-scale vehicles.

There are, however, manoeuvrability benefits that arise from decreased vehicle size. For example, the moment of inertia is a strong function of the vehicle’s characteristic dimension — a measure of a critical length of the vehicle, such as the chord length of a wing or length of a propeller in a similar manner as used in Reynolds number scaling. Because the moment of inertia of the vehicle scales with the characteristic dimension, L, raised to the fifth power, a decrease in size from an 11 m wingspan, four-seat aircraft such as the Cessna 172 to a 0.05 m rotor-to-rotor separation Blade Pico QX quadcopter implies that the Cessna has about 5 × 10^11 times the inertia of the quadcopter (with respect to roll)…This enhanced agility, often achieved at the expense of open-loop stability, requires increased emphasis on control — a challenge also exacerbated by the size, weight and power constraints of these small vehicles.
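That 5 × 10^11 figure is just the fifth-power scaling applied to the two length scales they quote:

```python
# Moment of inertia scales roughly as the characteristic length L to the fifth power,
# so the Cessna-to-quadcopter ratio quoted above works out to:
cessna_wingspan = 11.0      # m
quadcopter_span = 0.05      # m, rotor-to-rotor separation of the Blade Pico QX
print(f"{(cessna_wingspan / quadcopter_span) ** 5:.2e}")   # 5.15e+11, i.e. about 5 × 10^11
```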

[Figure: microdrone flight vs. mass]

Improvements in microdrones will come from becoming more insect-like and adapting knowledge from biological models:

In many situations, such as search and rescue, parcel delivery in confined spaces and environmental monitoring, it may be advantageous to combine aerial and terrestrial capabilities (multimodal drones). Perching mechanisms could allow drones to land on walls and power lines in order to monitor the environment from a high vantage point while saving energy. Agile drones could move on the ground by using legs in conjunction with retractable or flapping wings. In an effort to minimize the total cost of transport, which will be increased by the additional locomotion mode, these future drones may benefit from using the same actuation system for flight control and ground locomotion…

Many vision-based insect capabilities have been replicated with small drones. For example, it has been shown that small fixed-wing drones and helicopters can regulate their distance from the ground using ventral optic flow while a GPS was used to maintain constant speed and an IMU was used to regulate roll angle. The addition of lateral optic flow sensors also allowed a fixed-wing drone to detect near-ground obstacles. Optic flow has also been used to perform both collision-free navigation and altitude control of indoor and outdoor fixed-wing drones without a GPS. In these drones, the roll angle was regulated by optic flow in the horizontal direction and the pitch angle was regulated by optic flow in the vertical direction, while the ground speed was measured and maintained by wind-speed sensors. In this case, the rotational optic flow was minimized by flying along straight lines interrupted by short turns or was estimated with on-board gyroscopes and subtracted from the total optic flow, as suggested by biological models
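As a rough sketch of the ventral-optic-flow trick described there (the gains, speeds, and setpoints are all invented, and real controllers are far more careful): translational optic flow from the ground scales as forward speed divided by height, so if speed is held constant, holding the flow at a setpoint regulates altitude:

```python
# Toy ventral-optic-flow altitude regulator (all constants invented).
forward_speed = 10.0      # m/s, held constant by a separate speed controller
height = 20.0             # m, true altitude (unknown to the controller)
target_flow = 1.0         # rad/s setpoint -> implied altitude = 10.0 / 1.0 = 10 m
gain, dt = 5.0, 0.1

for _ in range(500):
    flow = forward_speed / height                  # what a ventral flow sensor reports
    climb_rate = gain * (flow - target_flow)       # too much flow = too low -> climb
    height += climb_rate * dt

print(round(height, 2))   # settles near forward_speed / target_flow = 10 m
```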

John Nash, 1928 – 2015

Sad news that John Nash was killed yesterday when his taxi crashed on its way back from the airport. He and his wife were ejected from the taxi when it ran into the lane divider.

Nash is most famous for his biopic, A Beautiful Mind, though obviously it is his intellectual contributions that you should know about.

His 30-page PhD thesis was what won him the Nobel Prize. His work on game theory was influential not just in economics, but also in psychology and ecology, among other fields.

Recently declassified letters to the NSA show how Nash was foundational to modern cryptography and its reliance on computational complexity. This is the description he included in his letter:

[Figure: Nash’s transmitting arrangement]

When he was killed, he was returning from Norway, where he had received the Abel Prize for his work on nonlinear partial differential equations.

He continued to publish; his final paper (afaik) was “The agencies method for coalition formation in experimental games”.

He also maintained (?) a delightfully minimalist personal web page.