Uses official ollama testcontainers #75
langchain4j left a comment:
@Martin7-1 thank you! I am wondering if there is a way to make it faster? Right now it takes around 2 minutes to download the (quite small) phi model, and it happens on every test run :(
@langchain4j I think the best way right now is to cache the Docker image (like …)
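One way to cache the image along these lines is the pattern from the Testcontainers Ollama docs: pull the model once, commit the running container to a local image, and reuse that image on later runs. This is only a sketch; the image names and the `phi` model tag below are illustrative, not taken from the PR.

```java
import org.testcontainers.DockerClientFactory;
import org.testcontainers.ollama.OllamaContainer;
import org.testcontainers.utility.DockerImageName;

public class CachedOllama {

    // Illustrative names; adjust to the base tag and model the tests need.
    private static final String BASE_IMAGE = "ollama/ollama:0.1.26";
    private static final String CACHED_IMAGE = "tc-ollama-phi"; // local cache image

    public static OllamaContainer start() throws Exception {
        // Check whether the committed image already exists locally.
        boolean cached = !DockerClientFactory.instance().client()
                .listImagesCmd().withReferenceFilter(CACHED_IMAGE).exec().isEmpty();

        OllamaContainer ollama = new OllamaContainer(
                DockerImageName.parse(cached ? CACHED_IMAGE : BASE_IMAGE)
                        .asCompatibleSubstituteFor("ollama/ollama"));
        ollama.start();

        if (!cached) {
            // First run only: download the model, then snapshot the container
            // so subsequent runs skip the slow model pull.
            ollama.execInContainer("ollama", "pull", "phi");
            ollama.commitToImage(CACHED_IMAGE);
        }
        return ollama;
    }
}
```

Note this caches per machine: a fresh CI runner still pays the full download unless the committed image is also pushed somewhere, which is the trade-off being discussed here.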
@Martin7-1 do you mean reverting to the existing logic?
@langchain4j Yes. What we are doing now is just migrating from … And maybe we need to update the Docker image on Docker Hub... It looks like it was last updated 5 months ago, which is too old.
@Martin7-1 you're right! We should probably revive https://hub.docker.com/search?q=langchain4j |
(branch force-pushed from c1730ab to dcdc081)
@langchain4j Maybe they just want to manage the platform itself :). BTW, I found another Ollama image that only supports the CPU version: https://hub.docker.com/r/alpine/ollama. Maybe it's enough for us, as I think our tests all run on CPU (GitHub Actions or locally). This image is just ~70MB, compared to ~4GB for the original Ollama image. You can refer to my latest test code and run it. I think it is faster, and maybe we do not need to push an image to Docker Hub anymore?
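Assuming the alpine/ollama image is actually wire-compatible with the official one, pointing Testcontainers at it might look like the sketch below. The `asCompatibleSubstituteFor` call tells Testcontainers to treat it as the official `ollama/ollama` image; the tag and the `phi` model are illustrative.

```java
import org.testcontainers.ollama.OllamaContainer;
import org.testcontainers.utility.DockerImageName;

public class AlpineOllamaExample {
    public static void main(String[] args) throws Exception {
        // ~70MB CPU-only image, declared as a substitute for the official one.
        DockerImageName image = DockerImageName.parse("alpine/ollama:latest")
                .asCompatibleSubstituteFor("ollama/ollama");

        try (OllamaContainer ollama = new OllamaContainer(image)) {
            ollama.start();
            // The model itself still has to be pulled inside the container,
            // so this only shrinks the image download, not the model download.
            ollama.execInContainer("ollama", "pull", "phi");
            System.out.println("Ollama endpoint: " + ollama.getEndpoint());
        }
    }
}
```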
(branch force-pushed from dcdc081 to c105ecc)
And all Ollama models are stored at …
Sounds interesting!
This will work for a local env, but not for running on GitHub CI, right? It seems that the slowest part is downloading the model from the Ollama hub, and downloading a container with the baked-in model from Docker Hub seems faster? IDK, just my feeling.
Maybe we can use actions/cache@v3 to cache …
Maybe integrate them (…)?
Does it actually work? In my experience, caching (e.g. the Maven cache) does not really work on GitHub Actions. Or maybe I am doing something wrong.
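For reference, the actions/cache step being discussed might look like the following. This is a sketch under two assumptions not stated in the thread: the model files are persisted on the runner at ~/.ollama/models (which, with a containerized Ollama, would require a bind mount), and the cache key is illustrative.

```yaml
- name: Cache Ollama models
  uses: actions/cache@v3
  with:
    # Assumes the container's model directory is bind-mounted to the runner.
    path: ~/.ollama/models
    key: ollama-models-${{ runner.os }}-phi
```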
If you have a bit of time and want to spend it on this, I would compare download speeds from Docker Hub and from the Ollama hub and calculate which option is faster :)
Hmmm... Let me test it.
Thank you! I think I will focus on this soon, and I will check whether the GitHub Actions cache works or not.
@Martin7-1 thank you so much for your help! ❤️
Closes langchain4j#2101