London to Toronto

First thoughts on arrival in TO:

1. BA is right that the 787 Dreamliner is much quieter than earlier jets, and the cabin air seemed fresher too. BA's critics are also right that the leg-room is poor.

2. Rapid transit to Pearson will start in May, not a moment too soon.

3. The Presto smart card system will finally get rolled out across the TTC this year, not a moment too soon.

4. My sixth time in Toronto, but my first for an extended stay. As in London, palpable sense of economic and population growth, transit and housing issues to the fore. Immigration is a key driver of economic growth and vice versa.

5. I walked around West Queen West on Sunday, a classic moving frontier of gentrification and, according to Vogue (who am I to disagree?), the second coolest neighbourhood on the planet.

6. This city takes food and eating seriously.

7. Sunday afternoon was sunny and relatively warm. At the first sign of better weather, Toronto residents are out in the parks and on porches, making the absolute best of it, a great (and I understand characteristically Canadian) trait.

Big Data and Cities – “Nobody’s good at this”

This article by John Lorinc in the Globe and Mail sets out in a balanced way both the opportunities and the potential pitfalls of applying Big Data to a host of urban issues. Lorinc sees New York, Chicago and Boston as leading the pack, driven forward by activist mayors. He also looks at how Toronto and other Canadian cities are beginning to get into the game, while arguing that there is some catching up to do. Lorinc quotes Professor Stephen Goldsmith of Harvard's Kennedy School of Government – with whom I shared a panel at the Smart Cities Expo in Barcelona last year – as saying that the application and intelligent analysis of data represents a sea change in thinking that could rival the shift to professional municipal management that marked the dawn of the Progressive Era over a century ago: "Whenever we're talking about data, we're talking about modernizing how government works".

Smart cities need smart clients – that is, we have to define the problem we are trying to solve, make best use of the data and tools we already have, and develop solutions from the bottom up before rushing to top-down, would-be comprehensive solutions.

Lorinc refers to the observations of Mike Flowers, formerly head of data analytics for Mayor Bloomberg in New York City and now with the Center for Urban Science and Progress (CUSP) in New York.

Mr. Flowers points out that city officials shouldn’t be tempted to blindly make huge investments in “smart city” information technology in order to foster such insights. Indeed, his group relied on off-the-shelf spreadsheets to compile the data that led to New York’s dramatic analytics breakthroughs…While most municipalities in recent years have released large tranches of raw information – road-closure locations, transit schedules, and other intelligence – through so-called “open data” portals, the game-changing potential lies in interpreting those mountains of quotidian facts and finding new ways of putting them together. The analytics is equal parts art and science. As Mr. Flowers says, “Nobody’s good at this.”

This is very much the same as the approach we hope to develop in London through the Smart London Plan – ambitious in vision and scope, but realistic and sceptical (in the best sense of that word) in terms of next steps and above all conscious of the need to engage with and respond to citizens.

The Automatic Statistician: helping people make sense of their data

The fascinating site "The Automatic Statistician" gives a glimpse into the near future, in which not just data analysis but report-writing and conclusion-drawing will become a shared activity between humans and machines.

The creators of the site explain their purpose thus:

Making sense of data is one of the great challenges of the information age we live in. While it is becoming easier to collect and store all kinds of data, from personal medical data, to scientific data, to public data, and commercial data, there are relatively few people trained in the statistical and machine learning methods required to test hypotheses, make predictions, and otherwise create interpretable knowledge from this data. The Automatic Statistician project aims to build an artificial intelligence for data science, helping people make sense of their data.

The current version of the Automatic Statistician is a system which explores an open-ended space of possible statistical models to discover a good explanation of the data, and then produces a detailed report with figures and natural-language text. While at Cambridge, James Lloyd, David Duvenaud and Zoubin Ghahramani, in collaboration with Roger Grosse and Joshua Tenenbaum at MIT, developed an early version of this system which not only automatically produces a 10-15 page report describing patterns discovered in data, but returns a statistical model with state-of-the-art extrapolation performance evaluated over real time series data sets from various domains. The system is based on reasoning over an open-ended language of nonparametric models using Bayesian inference.

Kevin P. Murphy, Senior Research Scientist at Google says: “In recent years, machine learning has made tremendous progress in developing models that can accurately predict future data. However, there are still several obstacles in the way of its more widespread use in the data sciences. The first problem is that current Machine Learning (ML) methods still require considerable human expertise in devising appropriate features and models. The second problem is that the output of current methods, while accurate, is often hard to understand, which makes it hard to trust. The “automatic statistician” project from Cambridge aims to address both problems, by using Bayesian model selection strategies to automatically choose good models / features, and to interpret the resulting fit in easy-to-understand ways, in terms of human readable, automatically generated reports. This is a very promising direction for ML research, which is likely to find many applications at Google and beyond.”

The project has only just begun, but its creators are clearly excited about its future, and they invite readers to check out the example analyses on the site to get a feel for the work.

Football managers, attention spans and economic growth

I have only just caught up with Katie Allen's blog on the Guardian website, reflecting on a speech given by Andy Haldane, the Bank of England's Chief Economist, to students at UEA earlier this week. Allen focused on one aspect of the speech – whether shorter attention spans threaten economic growth:

“We are clearly in the midst of an information revolution, with close to 99% of the entire stock of information ever created having been generated this century. This has had real benefits. But it may also have had cognitive costs. One of those potential costs is shorter attention spans,” Haldane told the University of East Anglia.

“Some societal trends are consistent with that. The tenure of jobs and relationships is declining. The average tenure of Premiership football managers has fallen by one month per year since 1994. On those trends, it will fall below one season by 2020. And what is true of football is true of finance. Average holding periods of assets have fallen tenfold since 1950. The rising incidence of attention deficit disorders, and the rising prominence of Twitter, may be further evidence of shortening attention spans.”
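Haldane's football extrapolation is simple linear arithmetic, and can be checked on the back of an envelope. In the sketch below, the 1994 starting tenure is a hypothetical figure I have chosen purely so that the quoted 2020 endpoint holds; the one-month-per-year decline and the "one season" threshold (taken here as nine months) come from the quote.

```python
# Back-of-envelope check of the quoted trend (starting figure is illustrative):
# if average managerial tenure falls by one month per calendar year from 1994,
# when does it first drop below one season (taken here as 9 months)?
def tenure_months(start_months, year, base_year=1994):
    """Linear extrapolation: tenure declines one month per year after base_year."""
    return start_months - (year - base_year)

START = 34  # hypothetical 1994 average tenure in months, chosen for illustration
first_sub_season = next(year for year in range(1994, 2100)
                        if tenure_months(START, year) < 9)
print(first_sub_season)  # → 2020 under these assumptions
```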

If there is a shift to short-termism, then innovation – and hence growth – may be at risk. Both Haldane and Allen point to a fascinating issue. But I also wonder whether the era of Big (and even Bigger) Data might eventually change our attitudes towards which decisions can be left to humans alone, and which must necessarily be delegated to, or at least shared with, machines. In 25 or 50 or 75 years, our descendants might be amazed not just that we let human beings drive cars and fly planes, but that we let them pick stocks and buy and sell bonds – all on their own.

Haldane’s speeches are always interesting, and – if I am not distracted by something on my smartphone – I intend to read the full speech shortly.

Cycle power

Films such as 'The Theory of Everything' and TV series such as 'Grantchester' imply that Cambridge is cycle heaven. And indeed there are a lot of people on bikes; Cambridge has, I believe, the highest proportion of journeys to work made on two wheels. But the infrastructure and traffic management lag the reality by a long way. While cycling across Parker's Piece or Midsummer Common is undoubtedly a joy, most city cycling in Cambridge involves the familiar mix of potholes, disappearing cycle lanes and an apparently mystical belief in the efficacy of faded white lines on tarmac to deliver a cycle-friendly city.

This may now be changing. The 'DNA cycle path' south from Addenbrooke's offers great views as well as a path surface whose colours reference the four nucleotides of the BRCA2 gene, and the cycleway alongside the guided busway is another fast, convenient route into and out of the city. And in the last few weeks work has begun on a segregated cycle lane along Hills Road. It is only about a mile long and will terminate at the Hills Road bridge – but it's a start.

Promoting cycling and walking is more than a transport policy; it is a way in which cities can say something about what sort of place they are, and what sort of place they want to be. These are not (just) quality-of-life issues; for cities like Cambridge (and London), their economic futures will depend on their ability to attract and retain ideas, investment – and people.