Once again, technology is changing the landscape of academic research at a rapid pace. There are dozens of new AI Research Assistants (AI-RAs) that will do things like generate a synthesis matrix in seconds or visualise the connections between studies, authors, and fields, allowing us to discover information in new ways. We also have the ever-popular Large Language Models, which will mimic our writing, art, code, and other forms of work. And that’s just the tip of the iceberg, folks.
In the Library, we are currently playing whack-a-mole (whack-AI-mole?) with various subscriptions. One provider of a substantial amount of our content added an AI-RA without notice or our consent. Another major provider has its own AI in the works as well. The provider of the general search on our website has also launched an AI-RA that will retrieve and summarize 5 articles on your topic. Sorry if we’re being a bit vague; we see a need to proceed with some caution in this post, so we aren’t going to name names, but there are more AI-RAs on the horizon. As these pop up, we rush to test them, and so far we’ve disabled most of them – if the vendor lets us.
Why are we disabling these AI-RAs? We have a range of reasons: your privacy; poor search results and varied output quality; vendors’ unwillingness to commit to keeping these AI tools free; academic integrity; ethical concerns such as bias; and the unknown impact these tools will have on students and instructors.
Privacy is actually a pretty big deal in libraries: privacy, “the right to read, consider, and develop ideas and beliefs free from observation or unwanted surveillance by the government or others,” is “the bedrock foundation for intellectual freedom.” Within our collections, even though you sometimes have to log in, what you personally search, read, or download isn’t information that we can see. For physical materials, we can’t tell you or anyone else what you’ve borrowed before; that information is purged from the system once items are returned. We also have responsibilities under the Freedom of Information and Protection of Privacy Act, so we are proceeding with caution in that regard, too.
When it comes to quality, the searches and search results that these AI-RAs currently produce cause us concern. These are not searches that any of us, as your librarians, would have run ourselves or guided a student to create. What we strive to teach students is how to construct a precise search; we use phrases, Boolean operators, wildcards, subject headings, and more to find the needles within the haystack that relate to the question at hand. In the screenshot below, one of the AI-RAs is searching for 11 sentences simultaneously (as they are all joined with OR) and combining an extensive number of search results into one list. It isn’t able to formulate a precise search, so instead it uses everyday language to cast as wide a net as possible. In practice, this means that users would be relying on the relevancy algorithm to bring appropriate sources to the top of the results list, assuming the user doesn’t simply stop at the 5 articles that the AI-RA recommends and summarizes (if we can even call them summaries).
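To illustrate the difference, here is a hypothetical example rather than one of our actual test questions. For a research question about social media and teen body image, a librarian-style search in a typical database might look something like this, with quotation marks for phrases, an asterisk as a wildcard, and OR used only to group genuinely comparable terms (exact syntax varies by database):

(“body image” OR “body satisfaction”) AND (“social media” OR Instagram OR TikTok) AND (teen* OR adolescen*)

By contrast, the AI-RA approach amounts to pasting several full sentences into the search box, joining them all with OR, and trusting the relevancy ranking to sort out the results.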
Librarians are not alone in noticing that AI approaches our work in ways that we wouldn’t think to ourselves, such as the AI-designed computer chips that were in the news. Three questions (not exhaustive) that we need to answer here are:
1) Is this an effective way to search?
2) Which search results are recommended and why?
3) How might AI influence students’ information seeking behaviours?
The latter concern was especially apparent to us when testing another surprise AI-RA release. The search terms that this tool generated for our test queries (all commonly seen student research questions) were nonsensical. In the screenshot below, “context,” “influences,” “individuals,” and “body image” are being searched as though they were synonyms; keywords separated by OR should represent identical, similar, or at least comparable concepts:
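To put that another way: a well-formed OR group sticks to a single concept, for example (teen* OR adolescen* OR “young adult*”), where every term is a variation on the same idea. A group like (context OR influences OR individuals OR “body image”) mixes four unrelated ideas, so the database returns anything that mentions any one of them, which makes for a much noisier result set. (The first grouping here is our own illustrative example, not output from the tool.)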
In our view, these AI-RAs are not ready for prime time, but we do anticipate that they will improve. Implicit in these flawed feature releases is the expectation that the labour and investment essential to making those improvements will come from you and your students, working in real time, with real stakes. We don’t think that this is appropriate, so we are pushing back.
When these AI-RAs are ready, will they still be free? So far, the vendors we’ve spoken to are unwilling to commit to that. If the VIU community becomes accustomed to using these tools, but we can no longer afford them, what happens then? As Library subscriptions come up for renewal, we are pursuing these assurances in our negotiations. For over a decade, our collections budget has stayed flat while prices rise, which has already left us with no choice but to cancel subscriptions to cover inflation and Canadian dollar fluctuations. The Library faces budgetary pressures like the rest of the university. Thankfully, the terms and cost for many of our subscriptions are negotiated collectively with other libraries to increase our bargaining power, and we will continue to work alongside them to achieve fair value, fair terms, and strong privacy measures in relation to these emerging features and pressures.
While negotiating terms and costs for these tools seems wise, we also need to know where we are going as a university. When we assess that these tools add value and are ready for release, we don’t want to catch folks by surprise; librarians see a wide range of assignments as we support students’ research, and we can’t help but notice that many syllabi and assignments regulate or forbid the use of AI to varying degrees. We’re also hearing from instructional colleagues that academic integrity cases are taking up a growing share of their workloads.
What can we do, as a University, to set up our students (and ourselves) for success amid such rapid technological change? As a proactive measure, some institutions have created asynchronous AI Literacy tutorials, and early studies are showing positive results (Kong et al., 2024; Ngo & Hastie, 2025). To that end, the Library is planning to create such a tutorial, and we’d love to have other educators contribute to its development. We have an opportunity to shape how AI, and approaches to it, grow on campus in a way that aligns with our values, purposes, and curriculum. We’re hopeful about what we might achieve in creating these tutorials as a grassroots, interdisciplinary initiative.
This blog post is under a CC BY-NC-SA 4.0 License.