Docker – Elasticsearch – Kibana – How to attach a volume while the containers are running

I currently have a Docker Compose YAML file consisting of Elasticsearch + Kibana.

These 2 containers have been running for a while and have swelled up nicely to about 600 GB.

Problem (legacy): someone forgot to mount a volume under the Elasticsearch service.

What does this mean for us? If the container is removed or recreated (e.g. the YAML file is modified and the compose stack is brought up again), all of that data is lost, and that must not happen.
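To verify this, you can inspect the mount list of the running container; an empty result confirms that all data lives only in the container's writable layer (<containerid> is a placeholder):

```
# Show the volumes/bind mounts attached to the container;
# "[]" means nothing is mounted and all data sits in the writable layer.
docker inspect -f '{{ json .Mounts }}' <containerid>
```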

Question: How can I attach a volume to the currently running container without losing data?

I have tried several things, but I am not satisfied with any of them.


  1. Using docker commit
  • a. docker ps -a
  • b. docker commit <containerid> <newimagename>
  • c. docker run -ti -v <volumeName>:/usr/share/elasticsearch/data <newimagename> /bin/bash

Problem: docker commit makes a “shadow copy” of the container’s writable layer, and this takes a long time (10 GB ≈ 10 minutes, so 600 GB ≈ 400–600 minutes). The full sequence is sketched below.
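For reference, here is the complete commit-based sequence in one block (a sketch; <containerid>, <newimagename> and <volumeName> are placeholders). It relies on the fact that Docker pre-populates an empty named volume with the image’s contents at the mount path:

```
# 1. Find the running Elasticsearch container
docker ps -a

# 2. Snapshot the writable layer (including all index data) into a new image;
#    this is the slow step, since ~600 GB have to be copied
docker commit <containerid> <newimagename>

# 3. Run the new image with a named volume over the data directory;
#    Docker copies the image's data dir into the empty volume on first use
docker run -ti -v <volumeName>:/usr/share/elasticsearch/data <newimagename> /bin/bash
```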

  2. Using docker cp
  • a. Create a dummy container.
  • b. Attach a volume to it.
  • c. Copy the data from the existing container into the volume.

Problem: docker cp also takes a lot of time, and anything written while the copy is running never makes it into the volume. A sketch of this variant follows below.
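A minimal sketch of this variant (names and paths are placeholders; note that the intermediate copy on the host temporarily doubles the disk usage):

```
# 1. Copy the data directory out of the running container onto the host
docker cp <containerid>:/usr/share/elasticsearch/data ./es-data

# 2. Create the target volume and fill it from the host copy
docker volume create <volumeName>
docker run --rm -v <volumeName>:/dest -v "$(pwd)/es-data:/src" alpine cp -a /src/. /dest/
```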


Main question and problem:
In my context, this means that everything written between the start of the docker cp or docker commit (e.g. 12/12/2023 07:00 AM) and its end (e.g. 12/12/2023 03:00 PM) is missing from the copy and cannot be saved to the volume; it survives only in the container’s doomed writable layer.

What is the best and fastest way to get the data into a volume with the least possible data loss?

Important: very little downtime is acceptable; this is a PROD environment.

Of course, that last sentence raises a few questions, but let’s not get off topic: we have plenty of backups.
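One further idea I am evaluating, to shrink the loss window from hours to minutes: a two-pass rsync, where the second pass runs while the container is paused. This is an untested sketch and assumes the overlay2 storage driver, root access to /var/lib/docker and rsync on the host; <containerid> and <volumeName> are placeholders:

```
# Resolve the container's merged overlay directory and the volume's mountpoint
SRC="$(docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' <containerid>)/usr/share/elasticsearch/data/"
docker volume create <volumeName>
DST="$(docker volume inspect -f '{{ .Mountpoint }}' <volumeName>)"

# Pass 1: bulk copy while Elasticsearch keeps serving traffic (can take hours)
rsync -a "$SRC" "$DST"

# Pass 2: freeze the container and copy only the delta (a short window)
docker pause <containerid>
rsync -a --delete "$SRC" "$DST"

# Cutover: either recreate the container via the compose file with the volume
# mounted, or unpause and accept that new writes diverge again
docker unpause <containerid>
```

Whether pausing (rather than cleanly stopping) Elasticsearch leaves the copied files in a consistent state is exactly the part I am not sure about.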
