tidyllm is an R package designed to access various large language model APIs, including Anthropic Claude, OpenAI, Google Gemini, Perplexity, Groq, Mistral, and local models via Ollama or OpenAI-compatible APIs. Built for simplicity and functionality, it helps you generate text, analyze media, and integrate model feedback into your data workflows with ease.
To install tidyllm from CRAN, use:
install.packages("tidyllm")
Or for the development version from GitHub:
```r
# Install devtools if not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
devtools::install_github("edubruell/tidyllm")
```
Here’s a quick example using tidyllm to describe an image with the Claude API and follow up with local open-source models:
library("tidyllm")
# Describe an image with claude
<- llm_message("Describe this image",
conversation .imagefile = here("image.png")) |>
chat(claude())
# Use the description to query further with groq
|>
conversation llm_message("Based on the previous description,
what could the research in the figure be about?") |>
chat(ollama(.model = "gemma2"))
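Replies are stored in the conversation object. As a minimal sketch of pulling the latest assistant reply out as plain text (using `last_reply()`; check the package reference for the current helper names in your version):

```r
# Extract the model's most recent reply as a character vector
conversation |>
  last_reply()
```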
For more examples and advanced usage, check the Get Started vignette.
Please note: To use tidyllm, you need either an installation of Ollama or an active API key for one of the supported providers (e.g., Claude, OpenAI). See the Get Started vignette for setup instructions.
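API keys are typically supplied via environment variables before the first request. A minimal sketch (the variable names below are the conventional ones; see the Get Started vignette for the exact name each provider expects):

```r
# Provide API keys via environment variables before the first request.
# Variable names here are assumptions; check the Get Started vignette.
Sys.setenv(ANTHROPIC_API_KEY = "your-anthropic-key")
Sys.setenv(OPENAI_API_KEY = "your-openai-key")
```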
The development version 0.2.3 of tidyllm introduces a major interface change to provide a more intuitive user experience. Previously, provider-specific functions like `claude()`, `openai()`, and others were used directly for chat-based workflows: they both specified an API provider and performed a chat interaction. Now, these functions primarily serve as provider configuration for more general verbs like `chat()`, `embed()`, or `send_batch()`. A combination of a general verb and a provider will always route requests to a provider-specific function like `openai_chat()`. Read the Changelog or the package vignette for more information.
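The other verbs combine with provider configurations the same way. A minimal sketch, assuming an embedding model (here `all-minilm`, an example name) has been pulled in Ollama and that the batch-helper signatures match your installed version:

```r
# Embeddings: the general embed() verb routes to the provider's
# embedding implementation (model name is an assumed example)
c("tidyllm connects R to many LLM APIs",
  "Embeddings map text to numeric vectors") |>
  embed(ollama(.model = "all-minilm"))

# Batch API: send several messages at once and fetch results later
# (signatures sketched from the verb names; see the Changelog/vignette)
msgs <- list(
  llm_message("Summarise base R in one sentence."),
  llm_message("Summarise the tidyverse in one sentence.")
)
batch <- send_batch(msgs, claude())
fetch_batch(batch, claude())
```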
For backward compatibility, using functions like `openai()` or `claude()` directly for chat requests still works, but now issues deprecation warnings. It is recommended to either use the verb-based interface:
llm_message("Hallo") |> chat(openai(.model="gpt-4o"))
or to use the more verbose provider-specific functions directly:
llm_message("Hallo") |> openai_chat(.model="gpt-4o")
For detailed instructions and advanced features, see the Get Started vignette and the other package vignettes.
There are some similar R packages for working with LLMs. For example, rollama focuses on the Ollama ecosystem and offers model-management functions such as `copy`, `create`, and `delete`, which are not currently available in tidyllm. These features make rollama particularly suited for workflows requiring model management or deployment within the Ollama ecosystem.

We welcome contributions! Feel free to open issues or submit pull requests on GitHub.
This project is licensed under the MIT License - see the LICENSE file for details.