Can you imagine one day having a virtual colleague?
Or a “machine teammate” capable of facilitating or improving our human activity or the organization itself?
What is the challenge here?
Technological advances, and the ways they are used, show that machines already play an important role in improving the productivity, safety and, more generally, the efficiency of a system.
If machines are already in use, it is because they can perceive and act in a system with relative autonomy. In the field of IoT, we observe that machines reach high levels of performance, whether working alone or collaborating with other machines (M2M).
When we shift the focus to Human-Machine(s) interactions, and more particularly to collaborative interactions within hybrid teams, it appears that machines are not yet developed enough to allow fluid and efficient collaboration.
Given our open and distributed production context, in the field of digital transformation and with the ambition of developing a CaaS (“Community as a Service”) business model, the improvement of Human(s)-Machine(s) interactions is one of our research topics. As such, we have read the article by Stowers et al. (2021), which deals with Human(s)-Machine(s) interactions in teams. Let’s quickly summarize what we have learned and why these elements are of great interest!
The article provides an innovative understanding of Human(s)-Machine(s) interactions in the “team system”, through the scientific prism of teamwork (Salas et al., 2018; Stowers et al., 2021), which has rarely been done. It highlights that new technologies and their uses have not been developed with the skills necessary for teamwork in mind.
To optimize the efficiency of the teams of tomorrow, while maintaining a high level of safety and comfort in interactions, it seems important to know which teamwork-related skills must be developed and implemented for our future “teammates”. Salas et al. (2018) identified skills that can be transposed into most teams: (1) communication, (2) coordination and (3) adaptability. By importing these teamwork skills into Human(s)-Machine(s) interactions, Stowers et al. (2021) raise interesting challenges that will have to be met if we want to evolve towards efficient systems and high-performing teams.
Traditionally in the literature on teamwork, the communication process corresponds to the exchange of information (the sending and receiving of information) between one or more members of the team.
The literature shows a positive effect of communication on collective performance, the development and maintenance of shared mental models (SMM), analysis and task planning. In the Human(s)-Machine(s) teams, the communication process was understood via the quality of the interface allowing an agent to understand the intentions, future plans, performances and reasoning processes of another agent.
In this process of Human(s)-Machine(s) communication, the literature reports recent advances in the fluidity of information sharing and the possibility of facilitating the coordination of tasks through the use of features such as “turn-taking” or the possibility for machines to recognize human language (Tellex et al., 2020). In the article analyzed here, the authors point to the concept of trust as central to the quality of Human(s)-Machine(s) interactions.
With a view to developing this feeling of trust between humans and machines, the authors highlight the need to develop machines capable of delivering quality information, in a structured process and via a reassuring interface.
To conclude this section on communication, the authors point out that the communication process is based on a two-way relationship where humans and machines are able to transmit and understand the information exchanged.
The coordination process refers to the organization of the skills, knowledge and behaviors of team members to achieve a specific goal.
Within the Human(s)-Machine(s) teams, coordination corresponds more to the management of dependencies between the different activities of the different agents.
The quality of Human(s)-Machine(s) coordination is based on:
the ability of team members to predict the behavior of others,
the sharing of common knowledge about the past and the present,
the ability of members to reorient themselves and provide help to another member.
These qualities of reliability, communication, orientation and recognition of intentions determine whether the machine becomes a “good coordinator” and participates in the creation and maintenance of SMMs and trust.
One of the main challenges of Human(s)-Machine(s) interactions is to allow machines to manage the modes of implicit coordination traditionally at work in teams. This implicit coordination reduces the workload and allows teams to stay focused on their task with minimal external distraction. At present, machines are not developed enough to detect, understand and make sense of the contextual cues that favor this implicit coordination.
Improving Human(s)-Machine(s) interactions within teams will depend on the capacity of machines to perceive human activities and to give them meaning with a view to coordinating activities. This notion is directly related to the last point: adaptability.
Adaptability corresponds to the ability of team members to modify their behavior according to changes linked to the evolution of the task, the project, the team or context.
From the point of view of interactions within the Human(s)-Machine(s) teams, we can distinguish two approaches:
an approach where adaptability is controlled by humans,
an approach where adaptability is controlled by the machine.
In this second approach, the main challenge is to allow the machine to detect changes in the situation in order to trigger adaptive mechanisms. The authors also point out that one interesting challenge is to allow machines to detect changes internal to the team, or external changes related to the context, before they occur.
In this perspective, the improvement of Human(s)-Machine(s) interactions is based on the capacity of machines to learn in context and to reuse this learning in an appropriate manner (Machine Learning + Artificial Intelligence). These innovations should allow machines to no longer simply recognize the knowledge and behaviors of team members, but also to be able to anticipate this new knowledge and human behavior.
From Theory to Practice
In our context of open, agile and distributed production, we encounter more and more situations where interactions within Human(s)-Machine(s) teams are decisive for our performance. As such, we are currently carrying out experiments on these three skills to facilitate interactions between our platform and team members.
On one side, we have implemented several robots (RPA) capable of generating information from the data present in our system. By 2019 we already had around 80 automated jobs.
On the other side, we have been studying communication means in depth:
between humans (both formal and informal means),
at various levels (projects, organization, individual relationships),
through various channels (email, instant messaging, phone, in real life),
and in various contexts (sharing the same office and culture, or not at all).
In 2022 we will work on implementing conversational agents to strengthen the communication process, prioritizing two aspects:
the bidirectionality of the communication process,
the optimization of trust through the precision of the information exchanged.
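To make these two priorities concrete, here is a minimal sketch of how a conversational agent could favor precision over responsiveness (every name, intent and threshold here is a hypothetical example, not our actual implementation): the bot only answers when its confidence in the information is high enough, and otherwise sends a question back, keeping the exchange bidirectional.

```python
# Hypothetical sketch: a bot that answers only when confident, and
# asks back otherwise -- trading responsiveness for precision.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff below which the bot asks back

# toy knowledge base: intent -> (answer, confidence in that answer)
KNOWLEDGE = {
    "task_status": ("Task #42 is in review.", 0.95),
    "deadline": ("The sprint ends Friday.", 0.55),
}

def reply(intent: str) -> str:
    """Answer only when confident; otherwise keep the dialogue two-way."""
    answer, confidence = KNOWLEDGE.get(intent, ("", 0.0))
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # bidirectionality: the bot sends a question back instead of guessing
    return f"I'm not sure about '{intent}' -- could you give me more context?"

print(reply("task_status"))  # confident -> answers directly
print(reply("deadline"))     # uncertain -> asks back
```

The asymmetry is deliberate: an imprecise answer erodes trust faster than a clarifying question does.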
With our daily need of distributing work on a large scale, we are constantly confronted with the problem of coordinating activities.
Most of the “digital agencies” have a dedicated (human) role for “planning management” and they all love Gantt charts :-). We don’t.
For the last few years, we have been using a pull model, where everyone has a personalized dashboard related to their role(s) in the community.
They are able to choose the tasks they prefer.
Our first experiments allowed us to implement robots capable of facilitating the management and distribution of tasks. We are currently working on improving our production process to gain efficiency, in particular on the interdependence between tasks.
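The pull model above can be sketched in a few lines (the task names, roles and data structures are invented for illustration; our platform is of course more involved): each contributor sees only the tasks matching their role(s), and a task becomes pullable only once the tasks it depends on are done, which is where the interdependence problem shows up.

```python
# Illustrative sketch of a pull-model dashboard with task dependencies.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    role: str                      # role required to pull this task
    depends_on: list = field(default_factory=list)
    done: bool = False

def dashboard(tasks: list, roles: set) -> list:
    """Return the tasks this contributor can pull right now."""
    by_name = {t.name: t for t in tasks}
    return [
        t.name for t in tasks
        if not t.done
        and t.role in roles
        and all(by_name[d].done for d in t.depends_on)
    ]

tasks = [
    Task("design-mockup", "designer", done=True),
    Task("implement-ui", "developer", depends_on=["design-mockup"]),
    Task("write-tests", "developer", depends_on=["implement-ui"]),
]

print(dashboard(tasks, {"developer"}))  # only 'implement-ui' is pullable
```

A robot facilitating distribution then only has to keep these dashboards fresh; contributors keep the freedom to choose among what is actually unblocked.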
While the activity is facilitated overall, we still observe a significant workload in terms of piloting activities: one of the reasons identified is emergency management and the prioritization of tasks in our open and distributed context.
Our initial work on this topic was to identify all the possible causes.
We have sought to facilitate this piloting activity by improving the adaptability of our system. In other areas of activity (energy and transport fleet management), we have tested explicit coordination methods where the information transmitted is mainly used for human decision-making. From a Human(s)-Machine(s) collaboration perspective, the challenge will be to give more autonomy to the machines in order to facilitate the overall coordination of the system.
We are currently testing the adaptability of our system by allowing Human(s)-Machine(s) interaction during times of emergency when tasks need to be treated as a priority.
We observed that contextual changes disrupted our production activity and reduced the comfort of our teams.
We are experimenting with the possibility of allowing robots to anticipate these contextual changes by monitoring activity in real time and by defining thresholds which trigger some communication (email alert, conversational bot, etc.).
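As a simple illustration of this threshold-based monitoring (the metric, window size and threshold values are assumptions for the example, not our production settings), a robot can watch a rolling average of some activity measurement and trigger a communication only when it crosses a predefined level:

```python
# Sketch: monitor an activity stream and trigger an alert when a
# rolling average crosses a threshold (values are illustrative).

from collections import deque
from typing import Optional

class ActivityMonitor:
    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # sliding window of samples

    def observe(self, value: float) -> Optional[str]:
        """Record one measurement; return an alert message if triggered."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.threshold:
            return f"ALERT: rolling average {avg:.1f} exceeds {self.threshold}"
        return None

monitor = ActivityMonitor(threshold=50.0, window=3)
for load in [30, 40, 45, 60, 70]:
    alert = monitor.observe(load)
    if alert:
        print(alert)  # would be routed to email, conversational bot, etc.
```

Averaging over a window rather than reacting to single samples is one way to avoid alert fatigue, which would itself degrade the comfort of the teams.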
In order to improve Human(s)-Machine(s) interactions and to develop a form of reactivity, and therefore adaptability, we are also currently working on the gamification of the activity (prioritization badges, points per task, specific contributor statuses, ...).
We will talk again about these aspects of Community Management in 2022.
What about Trust?
The authors point to the concept of trust as central to the quality of Human(s)-Machine(s) interactions (Tellex et al., 2020).
We have spoken a lot about trust throughout this post. However, we have not defined it.
What is it exactly? How do you define “Trust”? Some kind of scoring? And for what purpose? Moreover, it does not appear in the Human(s)-Machine(s) team model we saw earlier.
In social science, trust is tightly related to psychology and sociology. Not to bots :) And that’s where we start to understand the huge challenge here.
Depending on your level of trust (as a trustor) towards your teammate (the trustee), you will inevitably:
adjust your communication (level, tone, content, frequency),
adapt your behaviour (more precautions, more empathy, etc.),
coordinate at a finer granularity (a low level of trust means a higher perceived risk).
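These adjustments can be made explicit with a toy policy (the trust bands, check-in frequencies and task sizes below are invented for the example, not a validated model): the lower the trust, the finer the coordination granularity.

```python
# Toy illustration: map a trust level to hypothetical coordination
# settings -- low trust means tighter, more frequent coordination.

def coordination_policy(trust: float) -> dict:
    """Map a trust level in [0, 1] to example coordination settings."""
    if trust < 0.3:
        return {"check_in_every_n_hours": 2, "max_task_size": "small"}
    if trust < 0.7:
        return {"check_in_every_n_hours": 8, "max_task_size": "medium"}
    return {"check_in_every_n_hours": 24, "max_task_size": "large"}

print(coordination_policy(0.2))  # low trust -> tight coordination
print(coordination_policy(0.9))  # high trust -> loose coordination
```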
New model and next steps
That’s why we would like to suggest a new generic model here in order to highlight the importance of trust.
In the future, we would like to implement and study some “trust scoring” for both humans AND bots, so that we can create:
a better contextual model depending on this level of trust,
better behavior rules for adaptation, communication and coordination given this trust context.
For example :
Someone who has just joined the community cannot be fully trusted by default, at least regarding all the processes and cultural knowledge they will acquire later (distributed onboarding in IT teams takes a lot of time and effort, and is another great research challenge and one of our topics).
The same goes for a robot based on Machine Learning that has not yet been trained enough (e.g., deep learning for time-series forecasting is another interesting research topic).
Suggest that someone communicate with someone else when an event is triggered or a threshold is crossed. For example: use NLP (Natural Language Processing) to detect frustration from a community member and suggest that this community’s customer success manager reach out directly to check what is happening.
Trust scoring could then be updated from all the monitored activity of both humans and bots and their teaming interactions (gamification topics related to Community Management challenges).
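A naive version of this update loop could look like the following (the initial value, learning rate and update formula are our assumptions for illustration, not a validated trust model): every agent, human or bot, starts with a low default score that each monitored interaction nudges toward 1.0 on success or toward 0.0 on failure.

```python
# Naive sketch of trust scoring: an exponential-moving-average update
# applied identically to humans and bots (parameters are assumptions).

class TrustScore:
    def __init__(self, initial: float = 0.2):  # newcomers start low
        self.value = initial

    def update(self, success: bool, rate: float = 0.1) -> float:
        """Move the score toward 1.0 on success, toward 0.0 on failure."""
        target = 1.0 if success else 0.0
        self.value += rate * (target - self.value)
        return self.value

newcomer = TrustScore()
for outcome in [True, True, True, False, True]:
    newcomer.update(outcome)
print(f"trust after 5 interactions: {newcomer.value:.3f}")
```

One appealing property of this kind of update is that trust is earned gradually but lost quickly relative to its current level, which roughly matches the human intuition about trust.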
That’s all for today and for 2021 :)
Happy new year my dear bot!
Special Thanks to Thibault Kerivel for this collaborative post.
Salas, E., Reyes, D. L., and McDaniel, S. H. (2018). The science of teamwork: progress, reflections, and the road ahead. Am. Psychol. 73, 593–600. doi: 10.1037/amp0000334
Stowers, K., Brady, L. L., MacLellan, C., Wohleber, R., and Salas, E. (2021). Improving teamwork competencies in human-machine teams: perspectives from team science. Front. Psychol. 12:590290. doi: 10.3389/fpsyg.2021.590290
Tellex, S., Gopalan, N., Kress-Gazit, H., and Matuszek, C. (2020). Robots that use language. Annu. Rev. Control Robot. Auton. Syst. 3, 25–55. doi: 10.1146/annurev-control-101119-071628