[BUG]: getting modules error #371
Labels: bug (Something isn't working)
Comments
ralyodio: Check your yaml files for merge conflicts.
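A leftover Git conflict block (`<<<<<<<` / `=======` / `>>>>>>>`) makes a YAML file unparseable. A minimal sketch (file layout assumed) that scans the repo's YAML files for those markers:

```python
import re
from pathlib import Path

# Markers Git leaves behind when a merge conflict is unresolved
MARKER = re.compile(r"^(<<<<<<< |=======$|>>>>>>> )")

def find_conflicts(path):
    """Return (line_number, line) pairs that look like merge-conflict markers."""
    hits = []
    for n, line in enumerate(path.read_text().splitlines(), start=1):
        if MARKER.match(line):
            hits.append((n, line))
    return hits

# Scan every YAML file in the current directory
for yaml_file in Path(".").glob("*.yaml"):
    for n, line in find_conflicts(yaml_file):
        print(f"{yaml_file}:{n}: {line}")
```

If this prints anything, resolve the conflict (keep one side, delete the marker lines) before running the bot again.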
issue author: How can I get the OpenAI API link? For llm_api_url:
- Link of the API endpoint for the LLM model
- openai: https://api.pawan.krd/cosmosrp/v1
- ollama: http://127.0.0.1:11434/
- claude: https://api.anthropic.com/v1
- gemini: no api_url
- Note: to run local Ollama, follow the deployment guide at <https://github.com/ollama/ollama>
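The endpoint options above map onto the config quoted later in this thread (keys llm_model_type, llm_model, llm_api_url); a sketch of the choices, model names for the non-OpenAI backends being placeholders:

```yaml
# config.yaml -- pick ONE backend
llm_model_type: openai          # or: ollama / claude / gemini
llm_model: gpt-4o
llm_api_url: https://api.pawan.krd/cosmosrp/v1   # OpenAI-compatible endpoint

# For a local Ollama server instead:
# llm_model_type: ollama
# llm_model: llama3              # whatever model you pulled locally
# llm_api_url: http://127.0.0.1:11434/

# For Gemini, leave llm_api_url out entirely (no api_url).
```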
ralyodio: huh?
issue author: Sorry, how can I get that llm_api_url for OpenAI? Is it the one mentioned above, https://api.pawan.krd/cosmosrp/v1?
ralyodio: This is in my config.yaml:
llm_model_type: openai
llm_model: gpt-4o
llm_api_url: https://api.pawan.krd/cosmosrp/v1
The secrets.yaml has this:
llm_api_key: sk-l7xxxxx
...which is my openai.com api key
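A quick sanity check that both files expose the keys the bot reads (a sketch: the tiny parser below handles only flat `key: value` lines, and the required key names are taken from the comments in this thread):

```python
# Verify config.yaml and secrets.yaml carry the expected top-level keys.
REQUIRED_CONFIG = {"llm_model_type", "llm_model", "llm_api_url"}
REQUIRED_SECRETS = {"llm_api_key"}

def top_level_keys(text):
    """Collect top-level 'key:' names from simple flat YAML text."""
    keys = set()
    for line in text.splitlines():
        if line and not line[0].isspace() and ":" in line and not line.startswith("#"):
            keys.add(line.split(":", 1)[0].strip())
    return keys

def missing_keys(config_text, secrets_text):
    """Return (missing-from-config, missing-from-secrets) key sets."""
    return (REQUIRED_CONFIG - top_level_keys(config_text),
            REQUIRED_SECRETS - top_level_keys(secrets_text))
```

Feed it the contents of the two files; empty sets mean the schema matches.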
issue author: That is what I have. How can I create my own API link? Just tell me the process.
ralyodio: just use that one.
issue author: Still I'm getting the same error, please help:

Traceback (most recent call last):
  File "/home/desktop/LinkedIn_AIHawk_automatic_job_application/main.py", line 16, in <module>
    from src.linkedIn_job_manager import LinkedInJobManager
  File "/home/Desktop/LinkedIn_AIHawk_automatic_job_application/src/linkedIn_job_manager.py", line 8, in <module>
    from inputimeout import inputimeout, TimeoutOccurred
ModuleNotFoundError: No module named 'inputimeout'
ralyodio: yeah, that's a missing dependency. Just do pip install inputimeout
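The install has to land in the same environment that runs main.py (note the traceback's paths point at a virtualenv). A small sketch that checks importability from the interpreter you actually use, installing on demand:

```python
import importlib.util
import subprocess
import sys

def ensure_module(name):
    """Return True if `name` is importable by THIS interpreter, installing it if not."""
    if importlib.util.find_spec(name) is None:
        # sys.executable guarantees pip targets the active (virtual) environment
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None

# ensure_module("inputimeout")  # uncomment to install on demand
```

Running `python3 -m pip install inputimeout` by hand (with the virtualenv activated) does the same thing.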
issue author: I'm getting this error, please help me get rid of it:

Runtime error: Error running the bot: Education.__init__() got an unexpected keyword argument 'education_level'
Refer to the configuration and troubleshooting guide:
https://github.com/feder-cr/LinkedIn_AIHawk_automatic_job_application/blob/main/readme.md#configuration

My config has:
education_details:
  - education_level: "Bachelor's Degree"
ralyodio: I think that's old. Do this:
education_details:
  - degree: "Bachelor of Science Business Administration"
    university: "San Diego State University"
    gpa: "3.0"
    graduation_year: "1998"
    field_of_study: "Business Administration"
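The "unexpected keyword argument" error is what Python raises when a YAML entry is splatted into a constructor that no longer declares that field. A hypothetical reconstruction (the project's real Education class may differ; field names are taken from the YAML above) reproduces it:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the project's Education class;
# field names taken from the corrected YAML schema above.
@dataclass
class Education:
    degree: str
    university: str
    gpa: str
    graduation_year: str
    field_of_study: str

# The old schema key triggers the exact error from the bug report:
try:
    Education(**{"education_level": "Bachelor's Degree"})
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'education_level'
```

So the fix is purely in the YAML: rename the keys to match the current schema.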
issue author: But my degree is a Bachelor of Engineering.
ralyodio: so change it, obviously. That's just from my resume.
issue author: Now I'm getting all these errors, please help me get rid of this:

Traceback (most recent call last):
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/linkedIn_easy_applier.py", line 60, in job_apply
    self.gpt_answerer.set_job(job)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/gpt.py", line 147, in set_job
    self.job.set_summarize_job_description(self.summarize_job_description(self.job.description))
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/gpt.py", line 158, in summarize_job_description
    output = chain.invoke({"text": text})
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2875, in invoke
    input = step.invoke(input, config)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4441, in invoke
    return self._call_with_config(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1784, in _call_with_config
    context.run(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 404, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4297, in _invoke
    output = call_func_with_variable_args(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 404, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/gpt.py", line 86, in __call__
    reply = self.llm(messages)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 976, in __call__
    generation = self.generate(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate
    raise e
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 561, in generate
    self._generate_with_cache(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 793, in _generate_with_cache
    result = self._generate(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 589, in _generate
    response = self.client.create(**payload)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 646, in create
    return self._post(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1031, in _request
    return self._retry_request(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/virtual/lib/python3.10/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/linkedIn_job_manager.py", line 126, in apply_jobs
    self.easy_applier_component.job_apply(job)
  File "/home/zoheb/Desktop/linkedIn_auto_jobs_applier_with_AI/src/linkedIn_easy_applier.py", line 65, in job_apply
    raise Exception(f"Failed to apply to job! Original exception: \nTraceback:\n{tb_str}")
Exception: Failed to apply to job! Original exception:
Traceback:
  [the tb_str embedded in the exception message repeats, verbatim, the traceback and openai.RateLimitError shown above]
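A 429 with code 'insufficient_quota' means the OpenAI account has no remaining credit, so no amount of retrying fixes it: check the billing plan at platform.openai.com or point llm_api_url at another backend. For genuinely transient 429 rate limits, a generic exponential-backoff wrapper (a sketch, not the project's code; the simulated error stands in for openai.RateLimitError) looks like:

```python
import time

def with_backoff(fn, *, attempts=4, base_delay=1.0, retriable=(RuntimeError,)):
    """Call fn(), retrying with exponential backoff on `retriable` errors.

    In the bot's case `retriable` would be openai.RateLimitError, and only
    for true rate limits -- an 'insufficient_quota' 429 fails every time.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * 2 ** attempt)

# Example: a call that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated 429")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```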
Describe the bug
File "LinkedIn_AIHawk_automatic_job_application/main.py", line 16, in <module>
  from src.linkedIn_job_manager import LinkedInJobManager
File "LinkedIn_AIHawk_automatic_job_application/src/linkedIn_job_manager.py", line 8, in <module>
  from inputimeout import inputimeout, TimeoutOccurred  # type: ignore
ModuleNotFoundError: No module named 'inputimeout'
Steps to reproduce
take the latest pull
run: python3 main.py
Expected behavior
No response
Actual behavior
No response
Environment
None
Version
No response
Additional context
No response