Mediate token limits of LLM service providers #36

Open
philippzagar opened this issue Oct 20, 2023 · 0 comments
Labels
enhancement New feature or request

Comments

philippzagar commented Oct 20, 2023

Problem

As shown in #33, the Multiple FHIR Resource Chat functionality of LLMonFHIR quickly hits the per-prompt token limit of the OpenAI LLM provider.

Solution

#33 provides a quick fix for this limitation; however, we should investigate how to properly summarize FHIR resource data points.
These limitations are also hard to explore and reproduce, as we don't have the necessary extensive FHIR data sets available (except for @aalami5).

Ideas to explore:

  • Experiment with OpenAI prompt that summarizes FHIR resources
  • Give the summarization output a more fixed (JSON?) structure, not just free-flowing text
  • Omit unnecessary FHIR data points (such as identifiers, ...)

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines
@philippzagar philippzagar added the enhancement New feature or request label Oct 20, 2023