The rise of generative AI tools has ushered in a new era for modern businesses, bringing with it untapped potential as well as unprecedented challenges. This technology has exploded onto the scene, and there are no signs that it will slow down anytime soon. Some organizations have concluded that the risk of these tools is simply too great and have banned tools like ChatGPT and Bard entirely.
Effectively and safely implementing generative AI tools requires more than just technical expertise. For cybersecurity professionals, this is a significant part of our work: finding ways to enable while protecting. It brings together critical thinking and a range of soft skills, all focused on helping our organizations safely embrace this new wave of technology. This article delves into understanding generative AI tools and emphasizes the role of these non-technical skills in empowering organizations to harness AI's transformative power safely.
Understanding Generative AI Tools
Generative AI tools use machine learning algorithms to generate new data similar to the data they were trained on. New tools come out every single week, spanning a huge range of use cases: image generation, text generation, chatbots, software development, and so much more.
These tools promise some incredible efficiency gains, especially when someone's skills are augmented and expedited through generative AI. But it's not all sunshine and rainbows; these tools present some nontrivial challenges. For example, the output can be unpredictable, wildly incorrect, or inappropriate, posing real risks if left unchecked. It's here that critical thinking and soft skills become crucial.
The Importance of Soft Skills in Navigating AI Adoption
As AI becomes more prevalent, soft skills – those that facilitate human interaction and understanding – become ever more vital. To me, the key soft skills necessary for this kind of organizational change include critical thinking, communication, leadership, adaptability, and problem-solving. Let's look at each one a bit more closely.
Because generative AI tools cover such a wide range of use cases, we must be able to think critically about the risks and opportunities involved. What if a tool we've integrated into a critical business process starts to produce blatantly incorrect statements? How would we know? What would we do about it? How does that impact us and our customers or mission?
Communication is essential in expressing the benefits, challenges, and ethical implications of AI tools to all stakeholders. It's also a key part of the user research needed to understand what teams are hoping to accomplish.
Leadership is needed to steer the organization through the transformative process. This isn't just about the CISO. People with an interest in generative AI have an opportunity to lead through influence and coalition building within their organizations. Seth Godin's book Tribes comes to mind here.
Adaptability is required to cope with the rapid pace of change inherent in AI technology. This AI boom is moving incredibly quickly, and the interest in adopting these solutions is keeping pace. Security teams run the risk of being "bolted on" for another wave of new technology if they don't proactively engage.
Problem-solving, particularly creative and critical approaches, is invaluable in tackling unexpected issues that arise during the rollout of any new technology. Problems are inevitable, and they're likely to involve context unique to your organization.
Looking at Them All Together
For instance, suppose a generative AI tool like ChatGPT, integrated into a function like marketing or sales pipeline management, starts producing subtly incorrect outputs. It takes a team with critical thinking skills to assess and identify the issue, and then communicate it effectively to leadership to decide on next steps. That team must be adaptable and ready to change course if needed.
Empowering Organizations through AI: The Soft Side
A successful AI transition requires a culture and mindset shift. Working on the cybersecurity team, your focus should be on enabling your peers to do their work securely and efficiently. The extremes look like a total ban of the technology on one end, and a free-for-all rush to the latest tools being promoted on social media on the other.

Neither extreme is likely to turn out well in the dynamic, competitive environment we find ourselves in today.
Talk to your peers and learn what they are trying to accomplish, their goals, and their pain points today. What data do they need? Would they need to integrate and connect tools together?
It starts with a team that is focused on enablement and actively curious about the needs of their peers. Don't stay in the security silo. Engage, communicate, ask questions, and learn.
As we move deeper into the era of AI, it's clear that successfully leveraging generative AI tools goes beyond technical expertise. We can't bury our heads in the sand, but we also put ourselves at risk by blocking these tools entirely. So we turn toward safely integrating them into our workplaces, our processes, and our culture. Critical thinking and soft skills are paramount in making this change happen. The future of AI in business is not just about the latest tools and algorithms but also about the humans securely guiding their implementation, and the soft skills they bring to the table.