A little whimsy for your trip, sir?
I have 'an idea'.
And this is where it always starts: the linking and incubation of 'the idea', and the nurturing of the concept to see whether it is possible or not. Sometimes these ideas play out fabulously, like Poetry Pin and Storywalks, both of which have given me employment and many others enjoyment, alongside a space to be creative in new ways. QR code poetry and Hydrophobic poetry (which, I spotted a little while ago, has a similar counterpart in the USA) are just great concepts mixing science and creativity, with tons of potential for projects, workshops and user engagement.
"The impossible we can do, but miracles will take a little longer."
So, the new idea, which I will say from the outset sits more in the realms of miracle than the impossible, has been brewing for about half a year, possibly a lot longer, and comes from my love of podcasts, curious histories of place, GPS, AI and loads of other stuff. But to get you to understand what I am proposing, I have to tell you a short story. Last autumn I drove across country, and my passenger, Carol Carey from Somerset Art Works, was telling me interesting facts and notes about the places we were passing through and over.
'That's the bridge where the sea lion was found, you know, the one which escaped from Longleat.'
And after many miles of similar tales, all site-specific and referencing the places we were passing through, I thought: wouldn't it be great if you could have a sat nav with a setting which played short vox pops about these locations? So over the next 300 miles (while needing to keep sharp at the wheel!) I began to feel through the project a little and fathom whether it was really possible at all.
So I noted down the steps and popped it on the back burner. That was until recently, when a chance encounter provided one of the possible key elements and took it a substantial step beyond just me rattling on.
Phase 1 - what are the ingredients, and what's already out there?
Firstly, the GPS element: we know where we are, and a GPS fix gives us a hard number which we can search around - street names within 1km, churches within 3km, village names, bridges, rivers, buildings, houses etc. - all of these can be found by knowing your GPS position. This gives us some tag words which we could theoretically then begin to link to content. This is all doable, and has been done - see Kate Pullinger's Breathe, a narrative story which responds to the locations close by when you open the web page. It uses your GPS position and weaves it into the text of the tale, along with the time of day and the current (live) weather. It's very simple and yet very clever too. So, I don't know how to do this myself, but it has been done through a web app, so ingredient number 1 is on the table.
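To make that first ingredient a little more concrete, here is a minimal Python sketch of turning a GPS fix into nearby place tags: a haversine distance check against a tiny local gazetteer. The place names and coordinates are illustrative assumptions of mine; a real version would query an open dataset rather than hard-code them.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# A tiny hypothetical gazetteer: (name, latitude, longitude).
GAZETTEER = [
    ("Hatherleigh", 50.820, -4.067),
    ("Beaford",     50.905, -4.064),
    ("Barnstaple",  51.080, -4.058),
]

def nearby_tags(lat, lon, radius_km=5.0):
    """Return the names of gazetteer places within radius_km of a GPS fix."""
    return [name for name, plat, plon in GAZETTEER
            if haversine_km(lat, lon, plat, plon) <= radius_km]
```

Driving just outside Hatherleigh, `nearby_tags(50.81, -4.06)` would pick out only that town; widen `radius_km` and more tag words come into play.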
Ingredient number 2 is relevant audio content, and as it happens I am working with Beaford Arts, who have been running an Oral Histories project gathering hour upon hour of insight from North Devon residents. They have yards of the stuff and are keen to take it to new audiences, but how to flag the 'relevant content' amidst the great ramblings may be the biggest hurdle of the lot.
If we could transcribe the audio so that it can be searched, then perhaps we could filter and flag the right segments. But the Beaford Arts resource comes from multiple interviewers, all with different mics and interview styles, and many of the interviewees have broad Devon accents! Is this just too much of an ask? How many hours of slow manual transcription would be needed to flag the key words and relevant parts?
Well, interestingly, there is a new piece of online software available which may begin to tackle this. It's called sonic.ai, and it approaches transcription using artificial intelligence; my tests show that it makes a pretty good stab at this audio. It also cuts the content up into segments which it handily links to the text. So we would have transcriptions of lots of short pieces which can be searched for key words and then flagged - is this perhaps the first ticket to extracting key words from the transcribed audio? Or not (!), as it did stumble quite badly with place names, notably Hatherleigh being transcribed as 'have early'. This is no small hurdle, but I wonder if we could reverse this system somehow and look for place names inside the text, rather than relying on it extracting them correctly?
Well, let's just say that we can do all the above: transcribe, flag key words / tags such as place names, then knock up an algorithm to filter and target relevant audio to feed back to the user. There are going to be massive hurdles beyond those I've listed (and some I already know about), but this could be a blueprint for a whimsy-style sat nav. Plus, Arts Council England have been funding Oral Histories projects for some years, so scaling this across the country could be tantalisingly possible too.
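Pulling those pieces together, the blueprint is simply: GPS fix → nearby place tags → flagged segments that mention those places. A toy sketch of that final filtering step, with every title and tag invented purely for illustration:

```python
# Hypothetical flagged archive: each oral-history segment carries the place
# tags found in its transcript by whatever flagging process we settle on.
SEGMENTS = [
    {"title": "The sea lion under the bridge", "places": {"Bridgwater"}},
    {"title": "Market day memories",           "places": {"Hatherleigh"}},
    {"title": "Wartime in the town square",    "places": {"Hatherleigh", "Beaford"}},
]

def playlist_for(nearby_places):
    """Return titles of segments whose place tags overlap the places near the car."""
    return [seg["title"] for seg in SEGMENTS if seg["places"] & set(nearby_places)]
```

Feed in the tags from the GPS step and out comes a little location-aware playlist: near Hatherleigh you would get the market-day and wartime pieces, but not the sea lion.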
So here it is, in three bite-sized pieces (!). But as I said earlier, a chance meeting has taken this a step closer to reality: on 12th March I've been invited to a talk by Google at the European Space Agency in Exeter. Beaford Arts will be joining us too, and we will hopefully make some new connections and begin to build a team. There are grants available from the ESA to harness earth data (GPS - a key part of the project), so you never know, it may be that the impossible we can do after all, with miracles taking just a little longer!