The case for Nested 🐣 NFTs 🖼


Prompted by a post by Filippo Lorenzin on the loss of edits and changes when creating with digital tools, I want to make the case for a specific kind of ‘Nested NFT’.

First of all, I don’t think the claim is entirely true, although countering it does take an effort on the part of the creator. Most digital creators implicitly document the creation process, like CMYK 😉.

And others do it more publicly, like the artist SEA.WELL, who documents the creation of most of his works on his social media pages:

Second, I think there’s a unique opportunity for NFTs, and particularly ‘Nested NFTs’ – a ‘new protocol that lets you put digital assets inside your NFTs’ – created by Charged Particles, to preserve (part of) the creation process and even add to the ‘journey’ of an (art)work.

Let me explain. My personal interest in art is first and foremost visceral: is there an instant ‘click’? It usually translates to something abstract in primary colors. Movement (real or perceived) is also a big plus for me. But this is all initial. After the first impression I want to learn more: about the process, technique, message and meaning, and then about the history – of the work, the artist, even the collector(s) and curator(s). The totality of this is, in my opinion, the full experience of the artwork.

There’s nothing new here, in the sense that the above is what art critics, galleries, museums and art books are all about, but they can and will cover only a very select subset of all the art out there. Especially in the light of decentralisation and the urge to bypass (some) gatekeepers, it would be great to be able to independently document and preserve not just the characteristics, provenance and ownership of a work – that’s what the ‘initial’ NFT takes care of – but also the genesis, evolution (in the case of ‘living’ artworks) and derivatives or remixes of the artwork. In addition, a record of the journey – think of curation in specific galleries/exhibitions/displays and coverage by magazines/blogs etc. – would add to the appreciation of a work.

Rafaël Rozendaal suggested this for the exhibition-part, referring to the labels on the backs of old paintings.

In the comments on that post people refer to a number of initiatives that cover this part already, most notably Verisart. They indeed do a great job in documenting part of the journey of an artwork on the blockchain.

Image courtesy of Verisart

I gave it a try with a work I own. I haven’t completed the whole verification part of the process yet (everything documented so far was added by me as the owner), but at least some additional, relevant information about the artwork is ‘registered’, in combination with depictions of, descriptions of and references to the actual artwork.

So what else could you want to document in relation to an (art)work, especially in the form of a ‘Nested NFT’?

Let me explain with some examples.

Artwork evolution

First there is the unique case of depicting the evolution of artworks, for example the layered NFT artworks on Async Art. The art on this platform is so-called ‘programmable art’, in the sense that it can ‘evolve over time, react to its owners, or follow a stock price’. In the case of their layered artworks it means that different layers can have different states, and each layer can also have a different owner (who may or may not control the state). The result is an artwork with a considerable number of possible ‘end states’; you basically can’t predict what it will look like at any given moment.
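The layer mechanics can be sketched in a few lines of Python (a toy model with made-up layer and file names, not Async Art’s actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One layer of a programmable artwork; each layer can have its own owner."""
    name: str
    states: list[str]          # the possible images for this layer
    current: int = 0           # index of the active state
    owner: str = "unassigned"

    def set_state(self, index: int, caller: str) -> None:
        # Only the layer's owner may change its state.
        if caller != self.owner:
            raise PermissionError(f"{caller} does not own layer '{self.name}'")
        self.current = index % len(self.states)

@dataclass
class LayeredArtwork:
    title: str
    layers: list[Layer] = field(default_factory=list)

    def render(self) -> list[str]:
        # The artwork at any moment is the combination of each layer's active state.
        return [layer.states[layer.current] for layer in self.layers]

# Two independently owned layers already give 3 × 2 = 6 possible 'end states'.
art = LayeredArtwork("Demo", [
    Layer("The Present Place", ["bg-red.png", "bg-blue.png", "bg-gold.png"], owner="alice"),
    Layer("Probe", ["probe-a.png", "probe-b.png"], owner="bob"),
])
art.layers[0].set_state(2, caller="alice")   # alice changes her background layer
print(art.render())
```

With more layers and more owners the number of combinations grows multiplicatively, which is exactly why the work’s appearance at any given moment is unpredictable.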

Animation of the states throughout 2021 of ‘Artificial Entity Probes’ by Bård Ionson

See for example ‘Artificial Entity Probes’ by Bård Ionson. You have to follow the link to see the current state, because shown above is an animation of the different ‘end states’ the work has had during the past year or so. I made the screenshots myself because I own “The Present Place” layer – which is basically the background image – and I have changed it a couple of times. But independently other owners changed their respective layers, resulting in all these different outcomes over time.

Now I don’t consider these screenshots to be valuable or art of any kind (especially since I didn’t add any artistic input, which would make it a derivative), but they do add to the appreciation of the original work in my opinion. So ‘adding it to’ or ‘nesting it in’ the original NFT would make sense, I think.

Of course, the same would apply to any depiction or derivative of a physical artwork, for example a photo taken of it. See for example the photos I took (and edited) of ‘Bouquet of Tulips’ by Jeff Koons.

Again: nothing close to the quality, uniqueness or perceived value of the original artwork, but at least an attempt at an ‘artistic impression’ of the work, which again could add to the overall appreciation.

Artwork exhibitions

Second, there’s the case for documenting unique exhibitions in relation to an artwork. With digital works there’s even an additional factor, because the display options themselves (especially the size) can add to the experience of the artwork. See for example the list by Matt DesLauriers of ‘interesting ways digital and screen-based work can be exhibited’:

And again, the recordings of these unique exhibitions (in the form of photos or videos) can be artistic in themselves. See for example the recordings of two exhibits of www.muchbetterthanthis.com by Rafaël Rozendaal - 2006:

The World’s Biggest Kiss on Seoul Square by Rafaël Rozendaal for the New Museum - 2012 (follow the link for a video with soundtrack)

Midnight Moment by Rafaël Rozendaal for Times Square Arts - 2015

The NFT scene has also given a boost to online exhibitions. See for example Cryptofarian’s exhibition of ‘Dives’ called The EN.D:

Entrance to ‘The EN.D’ exhibition on Cyber.

The interesting part here – as it is with any multi-work exhibition – is the curation; which works are shown in what combination or configuration. It helps if the curator explains the reasoning, but the exhibit on its own already adds to the overall experience of an individual artwork.

Again, adding this particular experience (or at least a reference to it) – if possible – to the original work in the form of a ‘Nested NFT’ would be wonderful. Actually, if the exhibition uses NFTs as ‘access tokens’ or Proof-of-Attendance-Protocol ‘souvenir tokens’ this would make perfect sense.

Artwork derivatives

Thirdly, remixes or derivatives of the original artwork deserve special attention. “Imitation is the highest form of flattery”, as the quote goes, and artist Max Capacity takes it to the next level. He created DOS Punks – themselves inspired by, but not affiliated with, the infamous Cryptopunks – which were subsequently remixed by other collectors/artists and which he then re-remixed and called Fake DOS Punks:

Of course the fact that these works are part of a collection, which is recorded on the blockchain, already is a form of documentation and part of the experience. But if a specific remix of a specific artwork exists it would be great to ‘attach’ that work, or at least a reference to it, to the original.


The earlier mentioned Bård Ionson also created Fountain, with the explicit purpose of allowing remixing:

‘In short, Fountain is an archive of 501 (and counting) images created by Bård Ionson available for you to remix and resell. It is geared for the dynamic experimental crypto art markets. Composed of curated image from a GAN model created and trained by artist Bård Ionson. It is designed for an artist to make a remix, collage, meme or supporting elements in video art to remake art they can sell on NFT markets. [..] To sell the art you must make it into your own art with modifications or remixing that reinterprets the work in your personal artistic way. (You are the artist.) The new art you create must attribute Bård Ionson and provide at least the url to the original work. [..] Hopefully the sharing and remix chain will be unbroken. Once an artist remixes the original artwork they can choose to apply the Remix License to it. Then other artist can build on that artwork for something new.’

This initiative largely captures my ‘nesting’ argument, although the linking of artworks happens solely in the metadata. Maybe the true ‘nesting’ of NFTs could create some additional possibilities. I’ll discuss some in the next section.
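The metadata linking that Fountain prescribes could look something like this sketch (the field names and URLs are hypothetical, not Fountain’s actual schema):

```python
# Each remix must attribute the original artist and link the original work,
# so the "sharing and remix chain" stays unbroken.
def make_remix(original: dict, new_artist: str, new_url: str) -> dict:
    return {
        "artist": new_artist,
        "url": new_url,
        "attribution": original["artist"],              # required credit
        "original_url": original["url"],                # required link back
        "remix_license": original.get("remix_license", False),  # carried forward
    }

fountain_piece = {
    "artist": "Bård Ionson",
    "url": "https://example.org/fountain/42",   # hypothetical URL
    "remix_license": True,
}
remix = make_remix(fountain_piece, "you", "https://example.org/my-remix")
print(remix["attribution"], remix["original_url"])
```

Every link in the chain points one step back, so provenance can be walked from any remix to the original, but only via metadata, not via ownership of the token itself.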

Nesting NFTs

So why a ‘Nested NFT’ and not just an on-chain verification like Verisart’s or providing links in the metadata like the Fountain project? Let’s look at ‘Nested NFTs’ – as designed by Charged Particles – in a little more detail.

‘A Nested NFT is an NFT that acts as a container for digital assets and other NFTs. NFTs minted on the Charged Particles protocol, and other NFT platforms can hold NFTs and blockchain-based assets. So one NFT can serve as a portfolio that holds other NFTs; in the same way that a folder can hold pieces of paper. A Nested NFT can contain ERC-20, ERC-1155, and ERC-721 tokens. The NFTs are time locked and programmable; allowing interest to be directed into other wallets; and royalties and annuities directed towards creator wallets. [..] Nested NFTs allow for multiple artists to collaborate on a collection, include work-in-progress images, maintain engagement with their collectors over time, deliver content gradually, distribute social tokens or other community rewards, [..].’

So the Charged Particles protocol allows you to add not just (unlockable) digital assets, but also tokens and coins. ‘Other people can [even] add tokens to your NFT just like any wallet. However, only the owner of the NFT can discharge the interest or take the tokens out.’
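As a rough mental model – a toy sketch, not the actual Charged Particles contracts – a ‘Nested NFT’ behaves like this:

```python
from dataclasses import dataclass, field

@dataclass
class NestedNFT:
    """Toy model of a 'Nested NFT': an NFT acting as a container for other assets."""
    token_id: int
    owner: str
    nested_nfts: list["NestedNFT"] = field(default_factory=list)
    tokens: dict[str, float] = field(default_factory=dict)  # e.g. {"ERC20:DAI": 5.0}

    def deposit(self, symbol: str, amount: float) -> None:
        # Anyone can add tokens to the NFT, just like to any wallet...
        self.tokens[symbol] = self.tokens.get(symbol, 0.0) + amount

    def withdraw(self, symbol: str, amount: float, caller: str) -> float:
        # ...but only the NFT's owner can take tokens out.
        if caller != self.owner:
            raise PermissionError("only the owner can discharge assets")
        if self.tokens.get(symbol, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.tokens[symbol] -= amount
        return amount

    def nest(self, child: "NestedNFT") -> None:
        # Attach another NFT, e.g. a work-in-progress image or a remix reference.
        self.nested_nfts.append(child)

artwork = NestedNFT(token_id=1, owner="artist")
artwork.nest(NestedNFT(token_id=2, owner="artist"))   # e.g. a work-in-progress shot
artwork.deposit("ERC20:DAI", 10.0)                    # anyone can 'charge' the NFT
print(len(artwork.nested_nfts), artwork.tokens)
```

The asymmetry – open deposits, owner-only withdrawal – is what makes the NFT act like both a portfolio and a wallet at the same time.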

So my argument is not necessarily for all NFTs to be ‘charged’ with coins or social tokens, unless they are ‘tokens of appreciation’ to the collector 😉 (If I understand correctly that’s the @PAK route with the $ASH token, although it appears to have destructive consequences for the art). It is more that, in my opinion, it would be great to have all the elements that are related to an artwork (immutably) connected to each other. And especially allow for the documentation of the whole journey – from genesis to evolution – of an artwork.

And maybe, for some projects, it could be interesting to add some incentives into the NFT (targeted at collectors, curators, remixing artists or others) to proliferate the work, but that’s optional.


“Party at My Place”🕺💃 revisited

In May 2008, I presented at “The Web and Beyond - Mobility” conference in Amsterdam – thanks once again @Peter Boersma – with a talk named ‘Party at My Place’ 🕺💃. The premise was a vision, dream, desire that websites would evolve into virtual gathering grounds. You can see the original talk 🎤and slides 📝 at the respective links or the embed below.

a vision, dream, desire that websites would evolve into virtual gathering grounds

Now, more than 12 years later, it seems – partly due to some unforeseen 🦠 circumstances – that my dream is becoming reality, and also has a name: #VirtualSocialPresence (as coined by – who else – @Chris Messina).

A range of mobile and web applications has been rolling out in recent months that indeed allow you to virtually gather and socialize as you would in real life with added digital benefits of interactive elements, backdrops, superimposed personalizations and the potential of becoming a true virtual homestead (of course keeping in mind the obvious drawbacks of the lack of physical/tactile interaction – we are not there yet).

virtually gather and socialize as you would in real life with added digital benefits of interactive elements, backdrops, superimposed personalizations and the potential of becoming a true virtual homestead

Here I’ll discuss the different applications I’ve encountered and what I hope they will evolve into, including the 9 key elements for a true #virtualsocialpresence.

Virtual Hang Out Apps

Driven by the global pandemic and the consequent shelter-in-place/work-from-home (#WFH) necessity, a number of pre-existing apps that allow you to do an activity – like watching a 📺 TV show – virtually together with your friends, gained traction. To name a few: Squad (since acquired by Twitter), Houseparty and Rave.

Slightly different, but all the more relevant, is Twitch. Especially in combination with (watching) eSports 🎮 it gained even more popularity (see also Forbes: Some Of The Biggest Names In Sports Are Taking To Twitch During Coronavirus Lockdown).

All these apps focus on allowing you to virtually gather as a group and ‘hang out’, replicating as many of the regular interactions among a group of friends as possible, but not necessarily augmenting the experience.

Virtual Work Environments

A similar thing happened for work-related gatherings. Videoconferencing went mainstream with the likes of Zoom and Teams. Pre-existing workspace-sharing apps like Miro and Mural replaced whiteboard meetings and ‘brown paper’ sessions. Going beyond these ‘screen sharing’ 📌 apps, and focusing more on personal interaction, are examples like mmhmm, With and MakeSpace (work in progress). A key feature in With and MakeSpace is ‘spatial audio’, where you listen in on 👂 conversations by moving closer, similar to what would happen in an open-plan office space.

We all suffer from screen-fatigue by now, but in the words of @Kile Ozier it is A Fortunate Confluence…of Unfortunate Events; “For so many for so long – especially in particular demographics, screens have been held and characterized as soulless and obstructive; virtual barriers to personal contact, impersonal sappers of energy and life. I think, though, that this has begun to subtly transform from screens being perceived as impediments to being treated as the portals they are; exponentially accelerated at this moment of crisis.”

“The barrier has gone; the people are the experience. Screens perceived as hard barrier is set to evolve into them being embraced as the windows of opportunity that they are.” – Kile Ozier

@Raph D’Amico explores this further in an extensive thread (see below) and states “What the Great Unbundling of Presence means is that instead of trying to replicate the full experience of being together, we're going to end up with tools that let us encounter each other in radically different ways. Sometimes just a subset—perhaps super-humanly amplified ...or rich & full embodiments that are *different* than IRL, and emphasize different abilities & needs for connection. The Zoom grid is the faster horse. The next thing is going to be *many* next things, because no single solution will fit every need.”

Virtual Celebrations and Exhibitions

Getting closer to what I imagined in 2008 are the virtual celebrations 🎉 – e.g. for graduation or a religious ceremony – that started taking place in for example the game Animal Crossing. As well as the dedicated exhibitions (e.g. art displays 🖼 ) in metaverses like Decentraland and Cryptovoxels.

What makes these virtual events special is that these environments are built or decorated for the occasion and people also dress (their avatar 👩‍🎤🦸🏿🐼🧝‍♀️) for the occasion; all getting closer to an actual party.

Still, there is room for improvement. As I stated back in 2008: “I want my friends to come over to my place [i.e. my webpresence], to join me in the way I live and to show off my stuff.”

Virtual Personality Extension… (through 9 key elements)

Even after we conquer the pandemic 💉 there are several good reasons – limiting ✈️ 🚗 travel to ease the burden on the planet 🌍 for one – to keep the good elements of living part of our lives online. But instead of ‘making the best of it’ we should strive to make it the best.

Keep the good elements of living part of our lives online

So the updated version of my vision, dream, desire for a #virtualsocialpresence, fit for a virtually real gathering, has all of the above-mentioned innovations plus some additional elements to make it more personal, as in personality, and augmented real.

@Greg Isenberg lists a number of elements for a next-generation social network that align with this:

  • Group-first, broadcast-second (ie: Discord, Clubhouse)

  • Has "opening hours:" app changes based on time of day (ie: HQTrivia)

  • Pseudonymous

  • Elements of surprise

  • Status and reputation even more important

  • Come for tool, stay for vertical network

I would add:

  • Decentralized: it’s your place, on your terms (e.g. Mastodon)

  • Augmented: it’s reality plus the benefits of virtualization (e.g. Snapchat Lenses, Instagram AR filters)

  • Containing real estate: your webpresence – as in the domain or handle – has an actual value or elements of value in it (e.g. NFT’s)

Let’s go over these 9 key elements in a little more detail in the next sections.

1. Group-first; inside vs. outside

It starts with you and whoever you decide to interact with, i.e. ‘let in the door’🚪. That interaction is private, in principle, but can be public if so desired by the group. So there is more going on inside (for the ‘insiders’, the intimate contacts) than outside, but there definitely is an outside, a facade 🏛 (in every meaning of the word 🎭).

So, of course, on every level you might portray a better – a more realistic? – version of yourself and those are the bits and pieces you broadcast, but depending on the inner circle(s) you and the group establish, that’s what they get to see. So you get a more fluid transition between your public and private online/social presence (see element 3 as well).

2. Opening hours; when is the party starting

Almost all human interaction is planned, or at least prepared; you dress before you leave your house or close the door or curtains to delay interaction with unexpected visitors. And for special events you take extra measures, do special preparations. Finally, to be actually present you have to schedule 🗓 the time that you can be present. Outside this time slot there can be other forms of engagement (e.g. via a 🤖 chatbot, like the one I made for myself on Messenger), but not at the same, personal, level.

3. Pseudonymous; being the friend of a friend

When planning a party you might think: ‘the more, the merrier’. But to a certain extent of course; you try to avoid unwanted guests. Also, when going to a party, you want to know who you will know. To solve both, there is the ‘mutual friend’; the person that can introduce you to the new group or can introduce others to your party.

Of course, there’s nothing new here. But maybe a #virtualsocialpresence allows for a more nuanced and gradual scale between levels of trust; going from stranger (under a pseudonym), to friend-of-a-friend – maybe with a moniker – to a relation on ‘first name’ or even a verified (see Recognizing unique individuals) real name basis.

4. Element of Surprise; the sneak preview

The experience of a real(ly) good party usually spills over into the public space: extra cars/bikes/scooters parked around the venue, the waiting line in front of the entrance, the music/conversation/laughter seeping through, flashes of people having a good time 💃🕺!

Through selfies and #hashtags this is shared online already, so outsiders know something of what is going on inside, but for a #virtualsocialpresence we need a little bit more, I think. We can already eavesdrop👂, or ‘listen in’, on gatherings in apps like Clubhouse (if you are invited to the platform in the first place), but that’s almost the whole experience; there’s no surprise factor in being able to join the discussion. There should be that ‘you had to be there’ factor.

Maybe the “Proof of Attendance Protocol” can play a role here, but more on that in element 9 (below).

5. Status and Reputation: showing off

[Image: Rainbow NFT collectibles]

Status symbols are of all ages, so definitely also the Virtual Age. And where status and reputation on (online) forums are usually linked to contribution to and credibility within a specific community, what I’m looking for here is the more universal – or banal – kind: bling 💍 and exuberance 🤑.

The most explicit, and excessive, example of a status symbol in the virtual space is Cryptopunks. They are an example of Non-Fungible Tokens (#NFT) – assets on a blockchain – which is techno-babble for a limited-edition pixel-art character, currently valued at a minimum of $150,000! 😳

A slightly more down-to-earth/traditional example is the collecting and trading of digital art 🖼 and (sports) memorabilia, again in the form of NFTs.

You can show off your possessions in virtual galleries or, for example, The Rainbow 🌈 Ethereum Wallet App, which lets you view the collectibles at a certain address 👉.

You can argue about the inherent value of a digital image (see @Coldie’s response below), but ideally your #virtualsocialpresence should have a background canvas for all your online encounters. It then can be adapted, depending on the type of encounter (e.g. business or pleasure) and the kind of impression you want to make.

6. Tool and social network; let’s mingle

The premise of Greg Isenberg’s ‘tool vs. network’ point is that each (new) social network usually has a unique feature that draws in the crowds. The newness then wears off – or is copied profusely – but people stay for the community that has formed, regardless of the feature.

In the virtual party analogy this is no different; the venue makes the party 🎉. Specifically, you would want the equivalent of seats and tables in the form of spaces/rooms for your guests, so they can mingle in subgroups. The aforementioned tools like With and MakeSpace already provide features like this.

So, to host your own party you need to select the best platform for your needs; that is the social experience you desire to create for yourself and your guests.

7. Decentralized; a place to call your own

An important distinction in all this is that the whole experience takes place at your (virtual) place. So not an account on platform X (which then controls/owns whatever takes place), but an instance of X, or Y or Z on your own server, chain or site.

Examples of platforms/social networks that allow you to create your own instance or at least let you own your own content are Mastodon, Cent, and LBRY.

Ideally the platforms are all interoperable, so my choice for decorating myself (see element 8) and my virtual place doesn’t prevent me from visiting yours and vice versa.

8. Augmented; dress for the occasion

In 2010 I made an attempt at an AR app using the first version of Layar, by @Dutchcowboy and others. The idea was to be able to ‘virtually decorate yourself’. Called PeRSoN.NeL, it let you attach images to your location, which were visible as an overlay in the camera view of the app. In the animation you can see how it worked in theory, and in the image carousel what it looked like in practice.

The technology since then has come a long way, see for example the aforementioned Snapchat Lenses (that even come with AR glasses 😎!) and Instagram AR filters.

A whole new dimension of personal expression

These tools now make it possible to truly augment your virtual presence; be it in online meetings or actually out on the street. Vogue Business refers to it as Digital Only Beauty (referencing work by @inesalpha, also see below), but given the examples below, I think it goes well beyond beautification and turns into a whole new dimension of personal expression.

The RTFKT example also ties in with the status symbols in the form of NFTs mentioned in element 5, which I will revisit in element 9 below. @RTFKT Studios is ahead of the curve, but – in their own words: “The RTFKT project was scheduled to take off in 2040, but the human development in consciousness has accelerated faster than anticipated.” 🧐

And finally @Gambit takes it full circle with ‘wearable NFT’ the “Ethereum Block Chain”:

9. Real Estate; prized possessions and gifts

As should be clear by now, I’d love to be able to invite people over to my virtual place, interact with them, have a unique experience and make a lasting impression.

When you would enter you would see something like this:

Virtual NFT/Netart background (N.B. the animated GIF plays the animations faster than normal)

A background featuring a photo of a past vacation and some of my prized digital possessions:

Am I showing off here? Yes. But only to prove a point 😉.

We could discuss the art, the same way we would discuss the art on the walls of my home, or the plants 🪴 in my garden for that matter. It goes way beyond the header image on a social network profile page, not to mention the potential of personal (AR) filters, as mentioned in element 8, in the form of NFTs (see the @LateFX example on Filta below).

Our respective Augmented Reality NFT’s could turn the whole thing into a postmodern Bal Masqué 🎭!

[Image: POAP badge]

And actually that NFT could be the access key 🔑 to the party. As pointed out by @David Skilling the sale (or distribution) of a limited edition NFT gives the holder exclusive access.

An early example is Stoner Cats 🐾 by Mila Kunis and Ashton Kutcher, where owning one of the NFT cats 🐱, gives you access to the animated series.

Or the NFT becomes a gift/goodie bag 🎁, when you leave the party. See for example the SuperRare Collector Badge using the “Proof of Attendance Protocol” 👉.

It’s the unique souvenir from a memorable event.

In conclusion

To me it seems all the elements for a true #virtualsocialpresence are here, they just haven’t been combined yet. So what does it take to 🙌 ‘get this party started’ 🙌 ?

And even though I see #virtualsocialpresence as an inherently personal matter, I do think there is a business side to it; after all we don’t all build👷‍♀️– or even buy – our own houses either. So a commercial provider that sets up your virtual presence and provides basic amenities is logical, but the line of privacy and (data) ownership will be drawn way more clearly and absolutely than it is right now. So what is yours, stays yours, and if you move, it moves with you (within the realm of technical possibilities).

The line of privacy and (data) ownership will be drawn way more clearly and absolutely than it is right now.

Also, the creativity, versatility and promise of NFTs cannot be overstated. In my opinion they are the catalyst of many more things to come, or in the words of @Josh Ong:

Party on 🥳!


A Chatbot for the 🚴 Tour de France

In a new assignment for the NOS – the Dutch News Broadcasting association – we created an extension to the existing TV/Radio/Web coverage of the 2020 Tour de France. We added the option to follow the liveblog updates via various Instant Messaging platforms. In addition we created a chatbot that allowed users to ask about riders, teams, stages and results.

[Image: Tour de France chatbot infographic]

For more information see the NOS Lab website (in Dutch).


10 Easy Questions towards a sound Voice 🗣 Strategy

[Image: It’s time to talk]

Voice and other Conversational Interfaces are the latest mainstream technical innovation that could have a major impact on your business. I therefore present you with 10 easy questions – which might be a bit tougher to answer – in order to come up with a sound Voice Strategy.

 

1. Where are you now? Getting your bearings

Voice interfaces are being adopted faster than any communication technology before them:

[Image: Smart speaker market penetration]

Next to dedicated speakers – like the Amazon Echo and Google Home – they are in cars, mobile phones, earphones, appliances (think Smart TVs), etc.

It’s estimated that there will be 8 billion digital voice assistants in use by 2023.

And as recent Microsoft research showed: Voice assistance is becoming the norm, as 72% of surveyed respondents reported using a digital assistant in the past 6 months.

 

2. Do you need a voice strategy? Or 20/20 hindsight

[Image: hindsight]

Did you need an Internet Strategy in 1990?

Did you need a Search Strategy in 2000?

Did you need a Mobile Strategy in 2010?

Even if you don’t have a vision of the future of voice interfaces yourself, others do. So keeping tabs on developments in voice (and knowing what they are) is a strategy as well.

Keep an eye out for remarks and terminology like this:

[Image: Voice use quotes and terminology]
 

3. Do you have an open mind? Possible paradigm shift

Voice is not the “faster horse”, so don’t bluntly compare it with previous interfaces. Take a look at this graphic by voice visionary Brian Roemmele:

[Image: Interface eras graph]

Each era had its respective winners – from IBM to Microsoft, Google and Apple. Who, and in what way, will dominate the Voice Interface era (which, according to above mentioned Brian Roemmele, might be the last interface) is still very much undecided.

 

4. Does your business interact with humans? Voice is the most natural

Our brains are evolutionary wired for voice. Voice is the human I/O.

Everything you type and read is the work product of a “silent voice” in your brain.

The brain processes are visualised in this graphic, again by Brian Roemmele:

[Image: Brain processing]

And ~100% of the information from this phonological loop/speech is retained for ~400 seconds and may be processed.

In comparison:

  • ~97% of the information of the visual cortex becomes exformation. This means it’s lost immediately!

  • ~75% of the information of the auditory cortex becomes exformation. Again lost!

 

5. How does a user invoke your product/service? By asking!

Expect, in the near future, a user to simply state an intent to the nearest available, trusted voice assistant and have it fulfilled according to his/her previously uttered or learned preferences.

DILBERT © Scott Adams. Used by permission of ANDREWS MCMEEL SYNDICATION. All rights reserved.

This may indeed seem awkward now, but in order to be or stay part of this flow you need to ‘deconstruct’ your users’ needs into intents. And learn what wordings (utterances) are used, and in which context.
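To illustrate intents and utterances – real voice assistants use trained NLU models, but the idea is the same – a naive matcher could look like this sketch (the intent names and example utterances are made up):

```python
# Map example utterances to intents, then resolve a new query to the
# intent whose utterances share the most words with it.
INTENTS = {
    "book_flight": ["book a flight", "i need a plane ticket", "fly me to paris"],
    "check_status": ["where is my order", "order status", "track my package"],
}

def match_intent(query: str) -> str:
    query_words = set(query.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, utterances in INTENTS.items():
        for utterance in utterances:
            score = len(query_words & set(utterance.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(match_intent("Can you track my package please"))  # → check_status
```

The more wordings you collect per intent, the better the matcher covers the ways real users actually phrase a request – which is exactly the ‘deconstruction’ work described above.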

In the short term you will need to claim your voice search query, and in the long run you can expect users to switch assistants and/or queries only very reluctantly.

 

6. Is the interaction near- or far-field? They are different use cases

Near-field and far-field refers to the distance between the speaker/ear(s) and microphone/mouth(s):

From near-field (left) to far-field (right)

So for near-field use cases you should consider users with Apple AirPods earphones with built-in ‘Hey Siri’ detection, or the rumoured Amazon Echo Earbuds. A voice assistant in a car with a single occupant can also be seen as near-field. The point is that the interaction can be considered confidential, because it cannot be overheard.

In the case of headphones/earphones you should also keep in mind the future possibility of gesture control (i.e. taps and head movements) for ‘silent’ user input or feedback.

The remaining far-field use cases are those where the interaction is out in the open, where the conversation may be overheard or is intentionally shared between different users.

 

7. How smart is your AI? Claim your domain knowledge

Artificial General Intelligence is still a long way away, but that doesn’t mean you can’t master a specific domain already.

[Image: AI definitions]

Start by defining a narrow domain and feed your artificial intelligence, meanwhile managing user expectations.

AI improves on iterations and variations, so start generating as many as possible.

 

8. Are you making preparations? Start ‘dogfooding’

In order to get your voice/chat service and the underlying AI to a minimally viable level you have to start feeding and testing your system. An internal version of your service (i.e. ‘eat your own dog food’) is essential.

Kayak App with ‘Add to Siri’ button which allows you to link a dedicated spoken intent like ‘My travel plans’ to the array of commands.

The ideal place to start is with your customer care agents (which also makes it an opportunity, not a cost).

Each customer inquiry and resolution is ‘free’ input for your system. And the agent can be the (temporary) controlling and mitigating interface between your fledgling conversational system and the end user.

You can already take advantage of some low-hanging fruit, for example by adding the ‘Add to Siri’ button to your mobile app, so users can start to become familiar with the possibilities of voice control. For more information on the potential, see this article on ‘Siri shortcuts’.

 

9. Do you have screen/human follow-up in place? You must

It’s not voice only, but voice first. This means that in some situations the response to a spoken request needs to be visual, e.g. an overview, (long) list of options or an image on an app/watch/speaker screen. This is called multi-modal interaction.

One reason for this is that humans speak faster than they can type, but conversely we read (and visually scan/compare multiple data points) faster than we can listen. However, it should be purposeful use of the screen and not just a fallback because of limitations in the context awareness of the voice assistant.
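As a sketch of what ‘purposeful use of the screen’ could mean in practice, here is a hypothetical rule of thumb for picking a response modality. The thresholds and names are illustrative, not from any assistant SDK:

```typescript
// Hypothetical modality chooser for multi-modal responses:
// speak short answers, send longer lists to a screen when one is available.
type Modality = "voice" | "screen" | "voice_with_screen_fallback";

function chooseModality(resultCount: number, hasScreen: boolean): Modality {
  if (resultCount <= 1) return "voice";   // a single fact is fastest spoken
  if (hasScreen) return "screen";         // lists scan faster visually
  return "voice_with_screen_fallback";    // read top results, offer to send the rest
}
```

The point is that the decision is designed in advance, rather than dumping every answer onto a screen as a fallback.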


Also, in many cases a human respondent is still more intelligent and therefore faster (with the bot doing the mundane and preparatory tasks). So the handover from bot assistant 🤖 to human support 👨‍💻 needs to be seamless.

See for example Audible, Amazon’s audiobook company, which is offering live customer support through Amazon Echo devices.

 

10. Do you have the right expertise? Voice and Conversation Design Skills

Even though voice is yet another digital interface, the skills for designing one are quite different from those for web and mobile.


Especially the psychology and dynamics behind a good dialogue are new. Not to mention the NLP and AI components.

So if you have trouble formulating the right answers to the previous questions, get in touch via almar@virvie.com and we’ll schedule some time to talk!


Creating the Digital Experience Interface

Or a ‘cinematic’ web presence with a chatbot voiceover


Digital experiences are what fascinate and drive me. That’s why I admire new media art or #netart (e.g. MuchBetterThanThis.com by Rafaël Rozendaal) and I strive to create inspirational and lasting digital presences in whatever digital medium is applicable.

It’s with this in mind that I propose the idea of a web presence with cinematic traits and embedded chatbot, and critique the existing examples of digital experiences.

Introduction

Let me first state that — in my opinion — it’s impossible to actually create or design an experience; a person has an experience. And unless you’re a geneticist or neuroscientist running a mind control experiment, you don’t get to determine the actual experience (sorry UX’ers 🙁).

You can however try to influence the experience, by telling a story, compelling actions/reactions, evoking emotions and designing the interface the person interacts with. I prefer to call this the Experience Interface, rather than the User Interface. It’s a broad vs. narrow definition, where for me the User Interface is the controls the user is given and the Experience Interface is the totality of the environment and how it evolves during the interaction.

A number of factors are important here.

Story and Cinema

Our evolutionary hardwired way of sharing experiences is storytelling. According to Wikipedia “the term ‘storytelling’ can refer in a narrow sense specifically to oral storytelling and also in a looser sense to techniques used in other media to unfold or disclose the narrative of a story.” It predates writing, but rock-art (as supporting material) may have served as a form of storytelling for many ancient cultures.

So it’s not story reading, not story watching, but story telling. And (again via Wikipedia):

“Crucial elements of stories and storytelling include plot, characters and narrative point of view.”

Cinema, or motion picture, is a very compelling way of storytelling, and basically a big step up from rock-art. It also uses some techniques that greatly enhance the conveying of a story, specifically: the establishing shot, panning, tilting and zooming.

To clarify: “An establishing shot in filmmaking and television production sets up, or establishes the context for a scene by showing the relationship between its important figures and objects. It is generally a long or extreme-long shot at the beginning of a scene indicating where, and sometimes when, the remainder of the scene takes place.” (Wikipedia)

Panning is swiveling the camera horizontally from a fixed position. Tilting is the same, but rotating vertically. And finally, zooming brings the object closer (close-up) or further away (wide shot).

To actually craft a good visual story we can take some pointers from screenwriting. In the chart below you see the ‘3 Act Structure’ as defined by the late Syd Field.

3 Act Structure by Syd Field, author of Screenplay

The main elements are the plot points and working your way towards the resolution. The ‘action’ and ‘confrontation’ shouldn’t be taken literally in the ‘physical’ sense; the confrontation can also be emotional and the actions/plot points manifested as certain insights during the progression of the story.

Utilizing these storytelling elements and techniques has proven to be an effective way of influencing human experience.

Conversations

Maybe even more impactful than a story, and certainly essential to its dissemination, is the conversation about the story. The force of a story grows exponentially with the speed and ease with which it is shared (and no, this is not achieved simply by adding a ‘share’ button).

And with conversations there’s a lot of neuroscience at work.

Judith E. Glaser, in her book Conversational Intelligence states “Conversations have the power to change the brain — they stimulate the production of hormones and neurotransmitters, stimulate body systems and nerve pathways, and change our body’s chemistry, not just for a moment but perhaps for a lifetime.”

And “Conversations impact different parts of the brain in different ways, because different parts of the brain are listening for different things. By understanding the way conversations impact our listening we can determine how we listen — and how we listen determines how we interpret and make sense of our world.”

“Language plays a role in the brain’s capacity to expand perspectives and create a ‘feel-good’ experience” — Judith E. Glaser

The importance of this for every product is stressed by Ross Mayfield in his article Products Are Conversations. He states “Your customers want to talk with you […]. They want to be heard and want you to understand their needs. It’s your job to enable these conversations, and figure out how to have them at scale.”

I also like to borrow his quote:

“The single biggest problem in communication is the illusion that it has taken place”

Which he attributes to George Bernard Shaw, but according to Quote Investigator 🔍, it should be William H. Whyte 😉.

So the better conversation you have or incite, the better the experience, and the better your message gets across.

This is also what makes conversational interfaces (like chatbots and voice assistants) so powerful.

Experience preparation

Like language and visuals, there are other dramatic techniques you can use to influence an experience. For this I’d like to refer to Kile Ozier’s 5 tenets of Experience Creation (he can create experiences, he’s just that good):

N.B. The explanations contain my paraphrasing

  • Exploration of Assumption; what is the audience assuming when accessing the interface? And can you circumvent or overcome, or rather leverage and enhance, those assumptions?

  • Liberation of Preconception; preconception is a conversation(!) going on inside the head of the audience (in this case the user), reassuring them they know what’s going to happen. If you can liberate them from this, you can give them a fresh experience and renewed excitement.

  • Comfortable Disorientation; feeling safe in not knowing what’s next. Effectively executed, this technique results in an immediate, deeper level of trust on the part of the audience (in this case the user), and an intangible yet greater willingness to suspend disbelief.

  • Successive Revelation; too much, up front, can completely overload the audience early and virtually numb them to further sensation, empathy or inspiration, leaving them inured to subtlety and nuance as the Story or Experience unfolds. You should try to shape the arc of storytelling by balancing curiosity and revelation.

  • Subliminal Engagement; inviting the audience (in this case the user) to participate in the creation of their own experience. Allow for the journey or journeys to be completed in the imaginations of audience members, i.e. users.

So if you truly want to create a digital (user) experience, you’d better be willing to put up a show 🎭 !

Application to web presences

The above-mentioned fundamental elements and techniques with regard to (visual) storytelling and experience preparation are — in my opinion and perception — currently lacking from most digital experiences. Even the ones that are most dependent on story, like brand websites.

But there are some notable exceptions!

For example the Mercedes-Benz campaign site for their new, all-electric vehicle, the EQC:

Animated GIF. See website for the full experience (n.b. on mobile the intro tilt doesn’t seem to work)

Let’s break it down:

“Electric now has a Mercedes”. Now there’s a story headline! And there is a beautiful establishing shot (with tilt). The story unfolds. Reveal after reveal. There are pans and tilts (in intro and outro) and parallax scrolling (top layer scrolling faster than bottom layer to create depth)! All-in-all a visual feast, with some subliminal engagement: you control the scroll and can pan/tilt around in the establishing shot a bit.
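The parallax effect mentioned above can be sketched as a pure function: each layer is translated by a different fraction of the scroll position, so the foreground appears to move faster than the background. The speed factors here are illustrative, not taken from the Mercedes-Benz site:

```typescript
// Parallax sketch: a layer's vertical offset is the scroll position
// scaled by a per-layer speed factor (1 = normal, <1 = slower background,
// >1 = faster foreground), which creates the illusion of depth.
function parallaxOffset(scrollY: number, speed: number): number {
  return Math.round(scrollY * speed);
}
```

In a real page you would apply this on the scroll event (or with CSS transforms) per layer; the arithmetic is all there is to the depth illusion.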

What’s still lacking is conversation, or any meaningful interaction for that matter. Which is a shame, because it would have been a great opportunity for Mercedes-Benz to get a first impression of the user’s reaction to their new product.

On to another example: Apple’s iOS12 website, introducing their latest software update for the iPhone and iPad.

Animated GIF. See website for the full experience.

Again a breakdown: first, the establishing shot is reminiscent of the queue for the ride at the Wizarding World of Harry Potter at Universal Studios in Orlando.

Harry Potter portrait wall

Talking Portraits and all. Some ‘Comfortable Disorientation’, to which Kile Ozier notes: “Theme parks strive for this all the time, often with what I call the Venice Effect; bringing guests through a queue that is often labyrinthine, usually feels a bit cramped — limited sightlines, low ceilings — to then be suddenly released into a space that seems vast by comparison.”

For a fluent version with sound see YouTube

Of course Apple here has the benefit of the iPhones/iPads as ‘natural’ frames. There’s some ‘successive revelation’ in introducing the different features with auto-playing animations, but other than that there’s little more experience to be had.

Again the lack of interaction and conversation is a missed opportunity.

And to show that it is possible, see the Typeform blogpost on the rise of the conversational interface:

Animated GIF. See website for the full and personal experience

I encourage you to give it a try yourself by checking out the blogpost and putting in your own responses. Even though the ‘conversation’ is preformatted, by making your response choices you get the feeling you’re sharing your view and engaging on a basic level.

I first saw this on Adrian Zumbrunnen’s personal website, which does allow open-ended input. This, at least for me, immediately raises the bar for engagement, because what happens if he responds in person 😳? So it definitely triggers an emotion on my side, which is a good thing.

Chatbots of course make scaled conversations like these possible and can gather a great deal of customer insight. And the Typeform example shows the potential storytelling power and enhanced experience when it is seamlessly integrated in the total experience interface.

The ideal

So an ideal digital experience interface, to me, should contain all the storytelling elements mentioned above and utilize all the available techniques (in moderation, of course).

What I envision is basically a merger of the aforementioned Mercedes-Benz EQC site and the Typeform blogpost:

The interface commences by setting the scene in a visually attractive and seductive way. Then the chatbot comes in as a voice-over, or narrator, and engages you by giving pointers as to how to further explore the interface, triggering different user controls and visual attractions; unfolding the story with you. It asks questions and gives options, actively soliciting your feedback.

To be clear, the chatbot is not ‘on top of’ the site (as is often seen with ‘live chat’, completely disengaged from the site content and story) but embedded, taking or talking you through the story, actively engaging you and asking for your feedback in the moment.
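A minimal sketch of such an embedded narrator, assuming a preformatted script where each step pairs a scene cue with a narrator line and response choices. All scene names, lines and choices are hypothetical:

```typescript
// Embedded narrator sketch: a scripted story where each step shows a scene,
// speaks a line, and maps preformatted user replies to the next step.
type Step = {
  scene: string;                   // visual to show or animate at this step
  say: string;                     // narrator / voice-over line
  choices: Record<string, number>; // user reply -> index of the next step
};

const story: Step[] = [
  { scene: "establishing-shot", say: "Welcome. Shall I show you around?", choices: { "yes": 1, "just looking": 2 } },
  { scene: "feature-closeup", say: "Here's what's new. Curious how it works?", choices: { "tell me": 2 } },
  { scene: "outro", say: "Thanks for visiting. What did you think?", choices: {} },
];

function nextStep(current: number, reply: string): number {
  const next = story[current].choices[reply];
  return next ?? current; // unknown reply: stay on this step and re-prompt
}
```

The key design point is that the bot's script and the page's visual story share one state, instead of the usual detached live-chat widget.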

And more…

If we could have all this, there’s one more thing that would be the icing on the cake 🍰: sound. Or better, ‘sonic branding’ 🔊.

Of course, if you add video or a podcast to a web presence, you have sound already. But I see it as a stand-alone feature, supporting the overall experience.

To clarify, sonic branding, or sound/audio branding, is “the strategic use of sound … in positively differentiating a product or service, enhancing recall, creating preference, building trust, and even increasing sales. Audio branding can tell you whether the brand is romantic and sensual, family-friendly and everyday, indulgent and luxurious, without ever hearing a word or seeing a picture. And it gives a brand an additional way to break through audiences’ shortened attention spans.” (Wikipedia)

And again I’m not thinking jingles, but embedding sounds that you associate with the topic or product. To give you an idea listen to👂 the NIKE Freestyle video based on an idea by Jimmy Smith:

Wouldn’t you feel more engaged if you visited a website selling athletic shoes and heard this in the background?

I look forward to your feedback and hope to create some of these ideals with you.


Siri: “Which wife?”

Wait! What 😳? I asked: “Hey Siri. What can you tell me about my wife?” and was expecting to get some general information, like maybe her age, her occupation and perhaps some social media updates. As I was giving my wife some examples of voice commands, I most certainly did NOT expect to get Siri’s, Big Love-inspired(?), response: “Which wife?”.

When we both quickly looked at the screen, it turned out that I had two separate contact entries for my wife’s name on my phone, and Siri needed to know which one I was looking for. So Siri didn’t have a problem hearing me, knew who my wife is, and had the information available, but still stumbled on a seemingly minor detail.

So much for live demoing 😉, but it’s a fair example of the current state of voice assistants. A recent Digital Assistant IQ test put Google ahead of Apple’s Siri, but I don’t expect Google would have handled it much better.

However, I’m growing increasingly fond of using voice commands for certain tasks, as they just give a better (i.e. more convenient) experience than gesture or typed input. In general, repetitive daily tasks are the most likely to be more convenient through a voice command.

For me it’s “Turn on/off the lights”, “Set an alarm/timer for..”, “Is it going to rain?”, and (even if the answer to that question is affirmative) “Start an activity outdoor walking” (because I’m already holding the dog’s leash, the door keys 🔑, possibly an umbrella 🌂, and a ball 🎾 the dog 🐶 is trying to grab).

They’re basically cues or simple questions that I know my connected devices can handle (and don’t yet handle on their own automatically).

[Intermezzo regarding connected lights 💡: voice-operating your (connected) lights, in my case Philips Hue indoor & outdoor through Apple Home, is like going from a landline ☎️ to a smartphone 📱; the experience and range of possibilities are almost incomparable.]

Today (July 26, 2018), in the Netherlands, a slew of new ‘Skills’ became available on Google Assistant as they launched their Dutch language capability.

It included, among others, two national news broadcasters (@NOS, @RTL), our two airlines (@KLM, @Transavia), one of the biggest banks (@Rabobank) and two of the biggest energy suppliers (@Essent and @Eneco), which have connected thermostats. And, finally, our biggest grocery stores (@AlbertHeijn and @Jumbo), online retailer (@Bol.com) and our national postal service (@PostNL).

The fact that all these big players are present at launch does give an indication of the expected potential of voice assistants. I haven’t been able to test them yet, but from what I understand from the various press releases, the possible actions are mostly status updates and generic commands, some of which might make more sense (to me) than others. This is a good thing, because by experimenting with all these skills you can on the one hand experience the limitations and on the other see the possibilities of voice commands and interactions.

For understanding ‘Voice’ it’s especially important to get an idea of all the implications/variations of a simple question; as you can see in the screen recording below, Siri changes its understanding of my question as I speak.

Screen recording: Siri setting an alarm

For example, if I had paused a little longer after stating ‘6:30’, it would have set an alarm for the same day. You can try it yourself with any kind of voice command and see the different variations it goes through (at least with Apple’s Siri you can see it happen).
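A toy illustration of why that pause matters: as the transcript grows, re-parsing it can flip the interpretation. The parsing rules below are deliberately simplified guesses, not Siri’s actual logic:

```typescript
// Incremental interpretation sketch: the same parser is re-run on each
// partial transcript, so a trailing word like "tomorrow" changes the result.
function interpretAlarm(partial: string): string {
  const time = partial.match(/\d{1,2}:\d{2}/)?.[0];
  if (!time) return "listening...";      // no time heard yet
  return partial.includes("tomorrow")
    ? `alarm ${time} tomorrow`
    : `alarm ${time} today`;             // default: same day, as in the demo
}
```

Committing too early (right after “6:30”) gives a different answer than waiting for the full utterance, which is exactly the ambiguity the on-screen live transcript exposes.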

In my experience comparing voice interactions with typed or gestured interactions, some routines are just faster, while others are just cumbersome. This is keeping in mind the complete interaction, question and answer, where the latter can be simply executed (e.g. lights go on/off), spoken, or displayed(!).

As Ben Sauer pointed out in his #OpenVoice presentation: “Using voice input is great because it’s faster than typing. [But] Listening to voice output is hard because it’s slower than reading.”

The faster voice commanded, fully completed routines will stick.

Keeping this in mind, I look forward to a lot more voice-commanded (kitchen) appliances, so I don’t have to operate them with my dirty or otherwise occupied hands. And I can imagine that setting up devices through voice commands can give a much richer and more fulfilling experience.


A chatbot for news

Last year I got the opportunity to create an update for the chatbot of the Dutch news broadcaster NOS, and we launched it right before the end of the year.

It's a Facebook chatbot that gives you daily news updates and allows you to search their archives. You can find it here: m.me/NOSupdate

NOS Update chatbot

For details about the bot and how it is set up, I kindly refer you to the Medium article I wrote about it.


Experimenting with Chatbots

The best way to learn about bots is to build and experience them.

For my business bot I used OctaneAI.com, a chatbot creation platform that specifically allows you to create 'conversational articles' or 'convos'.

For my personal bot I used Chatfuel.com, a WYSIWYG chatbot creation platform with all the main Facebook Messenger capabilities and even some Artificial Intelligence functions.

Please click the links below to interact with each bot, or scan directly from Facebook Messenger:

Link to VIRVIE's bot

Link to Almar's bot


An updated web presence

Today, after more than 10 (!) years, I updated VIRVIE.com. It was long overdue, but I was just too fond of the original design.

The original site, or VIRVIE vintage, is still available (just follow the link), because I think it's important to also preserve our digital heritage.

It also marks the end of VIRVIE's ventures in the web and app space. One of these ventures was the creation of PeRSoN.NeL. It started out as a web-based URL shortener, which allowed you to add mini artworks to your links. It then evolved into an Augmented Reality app which allowed a user to show these same artworks, now even in 3D, at their location in reality.

The evolution of PeRSoN.NeL

After this the app pivoted into a diary app, which is still available from PRSN.NL, but is no longer maintained.
