April 2019


How We Come to Trust AI in Our Daily Lives


Published in Dutch on:
Emerce

You would have a hard time finding anyone in the Western world whose life has not been altered by digital technology in the last 20 years. Since the birth of the Internet (as we know it) in 1990, we have seen the emergence of, among countless others, email, laptops, smartphones, apps, tablets, VR and self-driving cars. And who knows what next year will bring, or the year after that?

Author
Oliver Blake

The digital landscape is in a state of constant flux. More aspects of our lives are becoming digitalized: interactions with organizations and businesses, relationships with friends, the way politics and activism are organized, or the way we search for love. But this relatively new state of affairs does not come with a rulebook that tells us how to behave. This research – focusing on everyday interactions with artificial intelligence – begins to uncover the ways in which people navigate their digital lives, setting boundaries and limitations in order to justify technology’s presence in their homes and pockets. And the ways in which they balance the risks that might come with it.

Emerging technologies, in particular AI, cause ‘moral panic’: a mass expression of fear that some evil threatens the well-being of our society.

Beware the killer robots

Of all the recent technological developments, one has captured our collective attention like no other: artificial intelligence (AI). Whether it conjures terrifying images from apocalyptic Hollywood films featuring robots running amok, or excites us about the potential of playing God, the very notion of AI is creating a buzz. Silicon Valley hails it as the next ‘big thing’. News and media would have us think that AI will kill us all or take our jobs. That kind of discourse positions the development of AI as something entwined with risk, putting both our economic stability and the survival of the human species in jeopardy. It is not unusual for new developments to have their naysayers, detractors and scaremongers. Such voices paint a picture of AI as something fundamentally risky, but risk is something seen throughout society in various shapes and sizes.
     Risk is not exclusive to the field of AI and digital technology. Modern society has become so complex, and we rely so entirely on its complexities, that potentially negative outcomes are to be found everywhere. The consequences of mass power outages, computer hacks and nuclear fallout are beyond conception; we only know they are catastrophic. This state of affairs led German sociologist Ulrich Beck to characterize the modern world as a ‘risk society’.1 For Beck, modern risks are hidden and beyond our conceptions, so that we are preoccupied by ‘what if’ questions and constantly guessing which risks to occupy our minds with. In recognizing the widespread risks of modern life, British sociologist Anthony Giddens explains that trust is essential for society to function.2 In our complex societies, we must rely on an abstract trust to keep us safe. We are dependent on experts we do not know and will never meet. The manner in which risk and trust respond to one another formed the theoretical basis for my approach to the research.

The voice for choice

Whether we choose to trust or distrust AI will shape the role it plays in our lives through the coming decades. No one quite knows what the future will bring, or what course developments in artificial intelligence will take. But what we can say for certain is that the technology will continue to advance, and that machine-learning algorithms are already finding their way into our everyday lives. How we respond to new technologies today will create their tomorrows. The key to a future in which both user and creator can properly use AI is first to fully understand the ways AI is used in our lives, and how people come to trust these technologies, despite the sensational headlines.
     Now that artificial intelligence is more widespread, we can begin to unpack some of the issues raised in the media through research. One piece of technology – the voice interface – presents an interesting angle from which to study how AI features in people’s everyday lives.
     Recently, the use of conversational interface devices in the home has become more and more common. People are bringing machines into their most personal spaces to assist them with their daily tasks. These artificially intelligent devices allow users to perform a variety of functions by speaking to them. Google and Amazon products currently dominate this market. In three-and-a-half years, Amazon sold over 20 million Echo units, while Google’s equivalent device – Google Home – is approaching five million in total sales. Amazon’s own promotional material for Alexa explains that conversational AI can “communicate in ways that feel natural, solve problems, and get smarter over time.”
     But the ascent of these devices has not been a smooth one. Controversies about them often feature in the news, and fresh stories continue to appear. Such reports revolve around the idea that the devices listen to you constantly, storing the data for future use and allowing Google and Amazon to paint complete portraits of consumers: a kind of ‘digital corporate spy’ that we invite into our homes. The nature and popularity of these devices make them an interesting way to study how people are starting to use AI, and how trust is formed in interactions with new, potentially risky, machines.

Asking the early adopters

To conduct this research, I set out to speak to people who live alongside these machines. Studying the people who interact with the device allowed me to understand people’s motivations for owning a voice interface, how the devices are used, whether the users associate risk with AI, and how young children are raised in an environment with a talking speaker that has a name and is ‘intelligent.’ I conducted three in-depth interviews. Petrushka, Lennart and Saloua all live in Dutch cities and can be described as middle-class, well educated, highly tech-literate, with an international, cosmopolitan background. In two of these instances, I saw first-hand how the devices were used in the home. Two participants own a Google Home and the third uses Amazon’s Alexa product. Seeing the devices in use allowed me to recognize how they work and what positions they take in people’s daily lives. Detailed descriptions, drawn from these interviews, are the basis for my understanding of how people live with AI and set their own rules for negotiating the digital world.

In the article Bringing the Perspective of the ‘Other’ into Focus, Nebojša Savić explains why it is precisely qualitative research, stemming from ethnographic fieldwork, that provides the necessary insights into the use of consumer goods, unlike the omnipresent quantitative methods of market research.

     By speaking to those who own voice interfaces, it became clear that a degree of trust in the device exists long before it is unpackaged. Lennart and Saloua had made the decision to get a Google Home, meaning they both put thought and time into the process. Having studied engineering, Saloua works for a large bank in the field of smart chip technology. She is interested in tech that can make her life easier and, after she read an article comparing two popular voice interfaces, she decided the Google Home was for her. Lennart had tried his friend’s Amazon Alexa a couple of times before he got a Google Home. A podcast about the devices supported his decision. As a regular listener, he trusts the host’s opinions. Finally, Petrushka did not buy the Amazon Alexa that sits in her living room. Her husband did. However, it would be a fair assumption that she trusts her husband not to bring a harmful object into the house and into contact with their children. In all three cases, it is evident that the trust that inspired them to own these products comes from multiple places, and cannot be considered in isolation. It is wrapped up in complex social relationships and existing trust networks.
     For two of the participants, the device could be described as a digital assistant that they have tailored to make their routine morning rush run a little smoother. Saloua, in her early 30s, comes down the stairs each morning, enters her living room, and greets her Google Home:


“Hey Google, good morning.”

“Hey Saloua, the time is 7:56am. Utrecht is currently 14 degrees and cloudy. Today will be sunny with a....”


     The nondescript, British-accented voice completes the weather forecast and goes on to read the news. Before, Saloua would search for the weather forecast or trawl the latest headlines manually – that is, as long as her laptop had enough battery life and hadn’t been misplaced. Now, these mundane tasks have been condensed into four words.
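
To make those “four words” concrete: conceptually, a single trigger phrase is mapped to a small bundle of actions that the device performs in sequence. The sketch below is a hypothetical illustration in Python, not Google’s actual implementation; the function names and the stubbed weather and news data are invented for the example.

from datetime import datetime

# Invented stand-ins for the weather and news services a real device
# would query over the network.
def fetch_weather(city):
    return f"{city} is currently 14 degrees and cloudy. Today will be sunny."

def fetch_headlines():
    return ["First headline...", "Second headline..."]

def good_morning_routine(user, city):
    """Bundle several small morning tasks behind one spoken trigger phrase."""
    parts = [
        f"Hey {user}, the time is {datetime.now().strftime('%H:%M')}.",
        fetch_weather(city),
        "Here is the news:",
        *fetch_headlines(),
    ]
    return " ".join(parts)

# "Hey Google, good morning" condenses into a single call like this:
print(good_morning_routine("Saloua", "Utrecht"))
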
     Lennart’s mornings follow a similar pattern. His Google Home reminds him of that day’s appointments according to his agenda, and what the weather is like. Unlike Saloua, whose main use of the device is in the morning, Lennart uses the device throughout the day. As a self-described “early adopter” who likes anything that does “cool stuff”, he uses his Google Home to run searches for train times and help out in the kitchen, too. Frequently, he will add things to shopping lists, look up recipes and set cooking timers.
     The Amazon Alexa that sits atop Petrushka’s cabinet in her open-plan living room also comes in handy when she is cooking. Alexa will remind her when a certain amount of time has elapsed, prompting her to turn off the oven. She also uses the timer if one of her three children has committed a punishable offence. Silently counting down their time in the ‘naughty corner’, Alexa will instruct them when their five minutes of confinement is up. By setting exact timers, the guesswork is taken out of cooking, and of minor acts of discipline. With Alexa’s assistance, Petrushka tries to simplify her daily routine in the house. But Alexa not only acts as a timekeeper when her kids are being punished; it also acts as a digital playmate. It reads interactive stories and tells them jokes when they ask it to. Petrushka has two Alexa devices in her home: one downstairs, which also contains a camera, and one upstairs in her twin daughters’ room. Her husband, whom she jokingly refers to as a “child” for his love of all things with buttons and wires, bought the devices and set them up in the house. Petrushka recounts how, initially, she struggled to see the point of the talking speaker, but once it was connected to some smart light bulbs, she slowly began to see its worth. What started as a simpler way to turn on the lights now simplifies her daily routine more and more every day.

Alexa is a timekeeper when her kids are being punished, and is also a playmate.

The upside of trust

Although the use of the devices may differ slightly, what the three cases have in common is the kind of function the device is required to perform: each is asked to save time or simplify tasks. Hearing the weather in the morning means Saloua knows whether she should pack her son’s raincoat. Lennart is less likely to miss an early appointment originally scheduled months ago. And preparing the evening meal is going to run a lot more smoothly if Petrushka sets a timer. In the midst of a busy routine, juggling work and home commitments, these small, time-saving functions have value. Saloua articulates this sentiment perfectly: “It just saves me time, which I think is becoming the most valuable thing there is.” New products often enter the marketplace promising to save us time, or to help organize and simplify our hectic lives. Historically, technology was hailed as the ultimate example of this: consider how microwaves, washing machines, vacuum cleaners and dishwashers changed domestic work. But there is also a feeling that we are more pressed for time than ever before, despite technological advancements and the abundance of ‘life hacks’ out there.
     The profusion of digital technologies around us, and our never-ending fixation on screens, are among the reasons why Lennart bought a Google Home. By using his voice to carry out functions, he can avoid spending too much time on his phone or laptop. It seems strange, however, that the solution to reducing the use of certain technology is to get more technology.
     Saloua and Petrushka also demonstrate how technologies can be used to take more control over our digital lives and, in these cases, their children’s. Petrushka likes the fact that Alexa can read a story to her kids, reducing the amount of time they spend in front of TV screens. This helps exercise their imaginations, much more than spoon-fed images do. Together with her husband, Saloua curated a Netflix channel for her four-year-old son, to prevent him from watching anything he came across in the standard channel. Such instances show how technology can be utilized to safeguard against the supposed ills of other technologies.

Lennart's household and family. Illustration: Ewout van Lambalgen.

Lennart

Device: Google Home
Location: Between the kitchen and the living room
Profile: ‘Early adopter’ (33 years) who likes anything that does ‘cool stuff’
Family composition: Lennart, girlfriend

How he uses it
Lennart uses the device throughout the day. His Google Home reminds him of that day’s appointments according to his agenda, and what the weather is like. He uses his Google Home to run searches for train times and help out in the kitchen, too. Frequently, he will add things to shopping lists, look up recipes and set cooking timers.

Noteworthy
By using his voice to carry out functions, he can avoid spending too much time on his phone or laptop. Privacy was a particular concern for Lennart. Paradoxically, he brought a Google product with a built-in listening device into his living room.

Saloua's household and family. Illustration: Ewout van Lambalgen.

Saloua

Device: Google Home
Location: In the middle of her living space, so the device has the largest possible range of use
Profile: Having studied engineering, Saloua (33 years) works for a large bank in the field of smart chip technology. She is interested in tech that can make her life easier.
Family composition: Saloua, husband and son (4 years)

How she uses it
Main use of the device is in the morning. Saloua comes down the stairs each morning, enters her living room, and greets her Google Home. Voice reads weather and headlines. “It just saves me time, which I think is becoming the most valuable thing there is.”

Noteworthy
A clear set of self-imposed rules guides Saloua’s behavior when it comes to technology. These rules help her justify the presence of technologies in her life. Reading a story to her child is something she considers a human activity, and the idea that a machine could replace this did not sit well with her.

Petrushka's household and family. Illustration: Ewout van Lambalgen.

Petrushka

Device: Amazon Alexa
Location: One downstairs (atop a cabinet in her open-plan living room), which also contains a camera, and one upstairs in her twin daughters’ room.
Profile: Bachelor’s degree in History and Archeology, currently unemployed and raising her three young children.
Family composition: Petrushka, husband and three kids

How she uses it
With Alexa’s assistance, Petrushka tries to simplify her daily routine in the house. Handy when she is cooking. Alexa will remind her when a certain amount of time has elapsed, prompting her to turn off the oven.

Noteworthy
Alexa acts as a timekeeper when her kids are being punished. Silently counting down their time in the ‘naughty corner’, Alexa will instruct them when their five minutes of confinement is up. It also acts as a digital playmate. It reads interactive stories and tells them jokes when they ask it to, reducing the amount of time they spend in front of TV screens.

Facing the fear

The heightened fear of the potentially damaging effects of new technologies comes from the ‘unknown’: no one is clear about the severity of the risk they pose. Privacy was a particular concern for Lennart. Recently, he had stopped using the Google services Gmail, Chrome and Search, to prevent the company from painting a clear picture of who he is and targeting ads accordingly. Paradoxically, he brought a Google product with a built-in listening device into his living room. He used his withdrawal from other Google services as justification for owning the device. Additionally, he has a very clear understanding of how the voice interface works, including its automatic shut-off functionality, and feels reassured by this.

It is highly interesting that basically all the participants set their own, sometimes even remarkable, boundaries in order to justify the presence of technology in their homes. This is apparently how we tend to deal with potential risks.
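
Lennart’s reassurance rests on the wake-word model these devices follow: audio is checked locally for the trigger phrase, and only a short window after it is treated as a command before the microphone effectively closes again. The sketch below is a conceptual illustration of that idea, with invented names and timings; it is a simplified assumption of how such gating works, not the devices’ actual firmware.

# Conceptual sketch of wake-word gating and the automatic shut-off Lennart
# describes. All names are invented for the illustration; real devices
# implement this in firmware, not in Python.

WAKE_WORD = "hey google"
SHUT_OFF_AFTER = 2.0  # seconds of silence before the microphone closes again

class VoiceGate:
    def __init__(self):
        self.listening_for_command = False
        self.silence = 0.0

    def on_audio_chunk(self, transcript, seconds):
        """Return a command to act on, or None while the gate stays closed."""
        if not self.listening_for_command:
            # Outside a command window nothing is stored or sent anywhere;
            # the device only checks locally whether the wake word was spoken.
            if WAKE_WORD in transcript.lower():
                self.listening_for_command = True
                self.silence = 0.0
            return None
        if transcript.strip():
            self.silence = 0.0
            return transcript  # forwarded for processing
        self.silence += seconds
        if self.silence >= SHUT_OFF_AFTER:
            self.listening_for_command = False  # the automatic shut-off
        return None

gate = VoiceGate()
gate.on_audio_chunk("background conversation", 1.0)       # ignored, gate closed
gate.on_audio_chunk("hey google", 0.5)                    # wake word opens the gate
gate.on_audio_chunk("set a timer for five minutes", 1.5)  # treated as a command
gate.on_audio_chunk("", 2.0)                              # silence; gate closes again
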
     Petrushka is not fazed by the notion that Amazon may be listening to her, and even recounts claims she had seen online that microphones are listening to our every word. “If you can’t say it out loud, don’t say it at all,” she tells me. However, placing the device with a camera downstairs, and not in her daughters’ room, was intentional: she feared that someone could see her children. Both Lennart and Petrushka perceived some risk associated with these devices; they knew the devices could infringe on their privacy. While these feelings were not based on concrete knowledge, Lennart and Petrushka are still doing what they can to mitigate the potential risk their devices pose. They created boundaries and limits that justified the presence of the technology in their homes, using their individual agency to counteract the perceived dangers that come with their new devices and to comfort themselves.
     In the same way, Saloua established personal limits when it comes to navigating the digital world. For instance, she tells me that she refuses to post images of her son on social media: she regards his image as his own intellectual property, so it is not up to her to spread it. If she wants to share images with family and friends, she has private platforms with which to do so. She recounted only one occasion on which she became angry that technology was infringing on her life. Again, this was focused on her young child. As a gift from her partner’s brother, her son had received a storybook with beautiful illustrations and an automated speaker that could read the story: “I hate that thing,” she tells me. Reading a story to her child was something she saw as a human activity, and the idea that a machine could replace this did not sit well with her. Anything that has a “real” human quality, such as time with her loved ones, is off-limits to automation. Again, a clear set of self-imposed rules guides Saloua’s behavior when it comes to technology. These rules help her justify the presence of technologies in her life.

People navigate their digital lives and set limitations to justify technology’s presence in their homes.

The steps beyond

Petrushka and Lennart both discussed limits to the extent to which AI and automated algorithms could feature in their lives. If intelligent devices could eliminate the mundane, repetitive tasks of the daily routine, then Petrushka would be more than happy; this would allow her to “focus on being a person.” Lennart was a lot more specific, citing anything to do with health and finance as outside the remit of what he trusts AI to do. Continuing this train of thought, Lennart expressed that he would not want to own an AI device with a camera. Not only did all the participants have individual limits on the way they use technology in the present; they also had ideas about how much AI could enter their lives in the future.
     The digital world is rapidly evolving, and it carries some sense of risk; our ability to let our imaginations run wild about its potential may make it seem riskier still. But the majority of people in the West now need it to some degree. This risky and changing landscape comes without rules, and without a standard or ‘proper’ way to do things. Those I interviewed all made their own set of rules. Be it stepping away from Google, not posting personal things on social media or not allowing cameras in certain areas of the house, we exercise reflective action to justify the presence of technologies. These expressions of agency show that people mold and shape the way technology features in their lives. Technology is not a self-determining force, separate from society. Instead, the way it is used shapes its meaning and, ultimately, both how it is perceived and what it actually is.
     This idea can be extended to your own use of digital technology. Do you cover up your computer’s webcam with a sticker? Do you know people who do? Why do they do this? Technology enables us to navigate it in different and particular ways, so that, within limits, we can always exert our individual agency in our handling of it. When faced with uncertainty about potential risks, we can do things to help us feel more trusting. Ultimately, we may not know everything about the workings of technology, or the policies of the large companies that make it (who may not be completely transparent), but we do trust ourselves, and many of those around us. Researching the way people use and feel about new technologies demonstrates that, by trusting ourselves and others, we exercise what power we have to make sense of the technologies in our lives.


References

  1. Beck, U. (1992), “Risk Society: Towards a New Modernity”. Sage Publications, London.
  2. Giddens, A. (1991), “Modernity and Self-Identity: Self and Society in the Late Modern Age”. Polity Press, Cambridge.
  3. Giddens, A. (1990), “The Consequences of Modernity”. Polity Press, Cambridge.
