Research libraries are seeing growing interest in artificial intelligence (AI), including generative AI, and the market appears to be racing to develop AI tools for research libraries and related sectors. Library workers express a spectrum of views about the prospect of increased AI use, whether through conventional machine learning or complex large language models: how well tools might perform, what kinds of algorithmic processes they rely on, and what role humans should retain. Many see AI as essential; others urge caution, citing implicit bias, inequitable access, and unreliable outputs as threats to long-term equity. This session will explore AI in research libraries and related sectors and consider how library adoption of AI applications may raise ethical problems. Embedded implicit bias persists in both standard machine learning applications and tools built on complex large language models, and it manifests in inequitable outputs. Such disparities risk perpetuating human- or data-embedded tendencies that distort AI-assisted decision-making. Global challenges to AI and equity include the possibility that more advantaged populations have access to higher-quality data, technologies, and algorithmic development, producing AI tools of differing quality for different populations and, in turn, perpetuating global and inter-community inequities. Finally, participants will explore how research libraries can act to reduce disparities and build equity in AI solutions, their inputs, and their outputs.