In recent years, the rise of generative artificial intelligence has driven companies to develop new methods for prioritizing their technological projects. The diversity of applications is not a problem in itself; the real challenge lies in weighing business value against cost, effort, and other relevant considerations. This challenge is particularly complex due to phenomena such as AI "hallucinations," where models produce plausible but incorrect output, and the rapid evolution of the regulatory landscape. To address these concerns, this article proposes integrating responsible AI practices into project prioritization strategies.
The architectural approach proposed by AWS defines responsible AI as the design and use of artificial intelligence technology to maximize benefits and minimize risks. This framework identifies eight key dimensions: fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. Throughout the development lifecycle, generative AI teams must assess potential risks along each dimension and implement mitigation measures.
This comprehensive view is especially relevant for generative AI projects, where the risks are new and the mitigations may be unknown. Integrating responsible AI practices from the outset offers a more accurate assessment of risk and reduces the likelihood of costly rework.
Although many companies have their own prioritization methods, the WSJF (Weighted Shortest Job First) method from the Scaled Agile Framework (SAFe) stands out as a valuable option. It calculates priority by dividing the cost of delay by the job size, where the cost of delay captures business value in terms of urgency and opportunity, and the job size reflects the required effort.
To illustrate, two generative AI projects can be compared: one focused on generating product descriptions using a language model, and the other on creating visuals for advertising campaigns with text-to-image models. Without considering responsible AI, the second project seems more urgent. However, by incorporating a risk assessment, the first project shows lower complexity and mitigation costs, making it more viable.
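The comparison above can be sketched in code. This is an illustrative model only: the scoring scale (1–10), the specific field values for the two hypothetical projects, and the idea of folding responsible-AI mitigation effort into the job size are assumptions for this sketch, loosely following SAFe's WSJF components; they are not prescribed by SAFe or AWS.

```python
def wsjf(business_value, time_criticality, risk_opportunity, job_size):
    """WSJF = cost of delay / job size (higher score = higher priority).

    Cost of delay is the sum of SAFe's three components:
    user-business value, time criticality, and risk
    reduction / opportunity enablement.
    """
    cost_of_delay = business_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size


def risk_adjusted_wsjf(business_value, time_criticality, risk_opportunity,
                       job_size, ai_mitigation_effort):
    """Illustrative extension: add the estimated effort to mitigate
    responsible-AI risks (hallucination checks, fairness review, etc.)
    to the job size before computing WSJF."""
    return wsjf(business_value, time_criticality, risk_opportunity,
                job_size + ai_mitigation_effort)


# Hypothetical scores (1-10 scale) for the two example projects:
# product-description generation carries low mitigation cost,
# ad-visual generation carries high mitigation cost.
product_descriptions = risk_adjusted_wsjf(
    business_value=8, time_criticality=5, risk_opportunity=3,
    job_size=3, ai_mitigation_effort=2)
ad_visuals = risk_adjusted_wsjf(
    business_value=9, time_criticality=8, risk_opportunity=3,
    job_size=5, ai_mitigation_effort=8)
```

With these assumed scores, the ad-visuals project has the higher cost of delay, yet once mitigation effort is included in the job size, the product-description project scores higher (3.2 vs. roughly 1.5), matching the reasoning in the example.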
Developing responsible AI policies is crucial for companies not only to avoid additional costs, but also to build trust among customers and comply with new regulatory frameworks. This approach could significantly alter the way generative AI projects are prioritized, thereby ensuring more informed and responsible decisions.


