When we talk about the importance of authentic leadership, we often focus on how a leader is perceived and builds trust. But authenticity has a shadow side, too, when we see it as a license to express our unfiltered selves.
In his work with executives and research into leadership development, Hannes Leroy has observed a phenomenon that he calls being an “authentic jerk.” The problem, he and Michael A. Daniels, Kristin L. Cullen-Lester, and Alexandra Gerbasi argue, arises when authenticity is shaped by who we think we are rather than by an understanding of what we stand for. They lay out a road map for leaders that begins with introspection and centers on identifying and acting from core values. Finding that proverbial North Star and openly doing the hard work of staying aligned with it is what builds trust.
Maintaining alignment between values and actions as a leader (and as an organization) is challenging when countervailing winds blow. In the current political climate, it can be particularly challenging to keep strongly held commitments to environmental stewardship and ensuring opportunities for all when doing so vigorously can make leaders and organizations the targets of those who disagree. Julia Binder and Heather Cairns-Lee offer guidance on how to steer through the current storm wisely, maintaining fidelity to values that employees and stakeholders care about without putting business interests at risk.
When it comes to risks, there are plenty of emerging ones to contend with as algorithms and AI play a larger role in strategy and operations. In recent years, we’ve seen a proliferation of responsible AI frameworks and principles that have helped executives understand how risk can arise in AI development and deployment. However, the ability to effectively manage those risks in practice is lagging, largely due to cultural and structural issues in organizations, according to Öykü Işık and Ankita Goswami. If leaders are to manage AI risk effectively, they must clearly define who is accountable and provide the tools, training, and other resources needed to do the job. And they must make ethical considerations part of decision-making and strategy rather than allowing responsible AI to become an afterthought when developing use cases for the technology.
A particular use case for algorithms that has exposed companies to significant legal risk is pricing. Chris K. Anderson and Fredrik Ødegaard zero in on charges of collusion and price-fixing in a series of lawsuits against vendors of pricing algorithms and companies that use them — in particular, multitenant landlords and hotels. Some cases have been settled, some dismissed, and some remain active. Companies considering how to apply AI to pricing decisions should watch closely and seek counsel to understand the nuances of legal risk.