It’s the end of another year and I’m culling my LinkedIn contacts. It’s not like LinkedIn is charging me for the storage space, or anything. It’s just that I feel like I should kind of know who I’m listing as a contact. Like many, I've been using LinkedIn for years. In the spirit of what I think LinkedIn originally envisioned, I used to limit my connections to people I sincerely knew--I'd only request or accept a connection with a legit human contact.
Fast forward to 2021 and I struggle to recall how I became connected to many of my LinkedIn contacts. I now even question whether some of them are human. If I click into one of these "mystery contacts'" profiles, I can usually discern the connection. As the years peel away, however, it becomes increasingly difficult unless we actually worked 1:1 together. Should I keep them as contacts? Is it valuable to them, to me, to LinkedIn?
Now, perhaps like you, I'm eyeballs deep in a new and remote work environment--what Forrester calls Anywhere Work. Increasingly, my new professional contacts are forming in virtual settings. Consequently, I now commonly connect on LinkedIn with colleagues and professional contacts I've never truly met in the flesh. Do I not know them as well? Do I know them better? Is the relationship somehow different from the olden days?
Although I work for Forrester, I’m not an analyst, but I can envision a not-too-distant future where we'll interact with truly lifelike bots in these professional, virtual, collaborative spaces. Perhaps we'll interact with co-bots who "know" us. Maybe we’ll develop a professional kinship with these nonhuman identities. Think Luke Skywalker and R2-D2's relationship. How will LinkedIn, companies, and other networks evaluate these new types of nonhuman relationships? Human with bot. Bot with bot who also interacts with humans?
Part of what Forrester does is research the business implications of technology like robots and automation. In a 2021 security and governance report (How To Secure And Govern Nonhuman Identities, Forrester Research Inc., January 2021), Forrester cites some numbers about these nonhuman entities as they relate to security and governance challenges for organizations. For the purposes of this article, however, I’m mainly interested in the general growth numbers of bots and sophisticated automation in professional settings:
To gain an edge with productivity and efficiency gains, businesses and governments are turning to automation, which will affect 80% of jobs by 2030. One of the fastest-growing types of automation is software bots that replace or augment business processes; 32% of global infrastructure decision-makers expect that their firms will use robotic process automation (RPA) over the next 12 months. Other types of automation, such as industrial robots and IoT devices, extend the benefits of automation to the physical world. As of 2019, there were 2.25 million robots in the global workforce, twice as many as in 2010. We refer to these types of automation collectively as nonhuman identities, defined as:
Machine-based identities in the workforce performing complex tasks that imitate human decision-making and/or perform tasks in conjunction with human operators. Nonhuman identities include robotic process automation (bots), robots (industrial, enterprise, medical, military), and IoT devices that perform complex tasks or work in conjunction with human operators.
The idea of Dunbar's number and other social research suggest there's a limit to the number of people with whom we can sustain social relationships, or at least the number of people with whom we remain motivated to stay in contact. Newer research questions Dunbar's number specifically but still points to a cap on the number of human relationships we can sustain. LinkedIn and other platforms push these human limits.
But what about relationships with bots? I’d think we can sustain a higher volume of relationships with bots. For instance, I’d think we could meet a new bot and give it access to a more personal connection--a database of who we are, how we like to interact with bots, our purchase history, etc.--so that new bot could instantaneously pick up with us as if it had known us for years. Even now, a human customer service rep who is a complete stranger can start getting up to speed with us simply by pulling up our file and reading previous reps' notes.
Because of our increased interactions with machines, our sensitivity and social cues are changing. For instance, I now feel comfy chatting with bots during an online customer service exchange or on the phone. But I alter the way I speak (or type) in an unnatural way. I slow down and truncate my speech and sentence structure. I’m not afraid or ashamed of being curt with a bot, especially if I feel it will help to clarify my interaction. I’m accepting of the nonhuman interaction and know it’s our future. I just wonder how far and deep it will carry. And I wonder how it will affect how we manage contacts, like on LinkedIn. Will we revert to having fewer human relationships (back to the significance of something like Dunbar’s number)? The explosive growth of robots and automation in the workforce may result in fewer, but deeper, human relationships. If it means I can nurture a more manageable sphere of human friends, I think I like that.