Could not run with docker-compose #12986
Please describe what error(s) you are seeing, along with what version of Docker and what operating system you are running on.
Same issue: alloy started and then exited. Error log:

env:
@rea1shane I have opened #13065 to track that issue, as it appears you are running a different docker compose setup than the one this report is about. If I am mistaken, please let me know.
I've tried

I checked the logs in
I'm sorry to hear you are running into an issue with this. I just tried a fresh install of the docker-compose setup in the production/docker directory on Fedora 34, and did not have any problems. I suspect there's an error earlier on in the logs that may clue us in as to the issue. Can you capture the full logs for all of the docker containers and attach them here?
production.log
Thank you. What does your networking look like? What IPs are you using, and what are your interface names? I wonder if some of the information in this issue would help?
`ip a`
By utilizing the

I modified the
There must be something else blocking me. After replacing the config file with yours, and replacing eth0 with enp2s0, I got this:
This is my loki.yaml:

```yaml
auth_enabled: true

server:
  http_listen_address: 0.0.0.0
  grpc_listen_address: 0.0.0.0
  http_listen_port: 3100
  grpc_listen_port: 9095
  log_level: info

common:
  path_prefix: /loki
  compactor_address: http://loki-backend:3100
  replication_factor: 3
  instance_interface_names:
    - enp2s0
  ring:
    instance_interface_names:
      - enp2s0

storage_config:
  aws:
    endpoint: minio:9000
    insecure: true
    bucketnames: loki-data
    access_key_id: loki
    secret_access_key: supersecret
    s3forcepathstyle: true

memberlist:
  join_members: ["loki-read", "loki-write", "loki-backend"]
  dead_node_reclaim_time: 30s
  gossip_to_dead_nodes_time: 15s
  left_ingesters_timeout: 30s
  bind_addr: ['0.0.0.0']
  bind_port: 7946
  gossip_interval: 2s

ingester:
  lifecycler:
    join_after: 10s
    observe_period: 5s
    interface_names:
      - enp2s0
    ring:
      replication_factor: 3
      kvstore:
        store: memberlist
    final_sleep: 0s
  chunk_idle_period: 1m
  wal:
    enabled: true
    dir: /loki/wal
  max_chunk_age: 1m
  chunk_retain_period: 30s
  chunk_encoding: snappy
  chunk_target_size: 1.572864e+06
  chunk_block_size: 262144
  flush_op_timeout: 10s

ruler:
  enable_api: true
  enable_sharding: true
  wal:
    dir: /loki/ruler-wal
  evaluation:
    mode: remote
    query_frontend:
      address: dns:///loki-read:9095
  storage:
    type: local
    local:
      directory: /loki/rules
  ring:
    instance_interface_names:
      - enp2s0
  rule_path: /loki/prom-rules
  remote_write:
    enabled: true
    clients:
      local:
        url: http://prometheus:9090/api/v1/write
        queue_config:
          # send immediately as soon as a sample is generated
          capacity: 1
          batch_send_deadline: 0s

schema_config:
  configs:
    - from: 2020-08-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
    - from: 2023-07-11
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: 2024-01-10
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: 2024-03-29
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  max_cache_freshness_per_query: '10m'
  reject_old_samples: true
  reject_old_samples_max_age: 30m
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  # parallelize queries in 15min intervals
  split_queries_by_interval: 15m
  volume_enabled: true

table_manager:
  retention_deletes_enabled: true
  retention_period: 336h

query_range:
  # make queries more cache-able by aligning them with their step intervals
  align_queries_with_step: true
  max_retries: 5
  parallelise_shardable_queries: true
  cache_results: true

frontend:
  log_queries_longer_than: 5s
  compress_responses: true
  max_outstanding_per_tenant: 2048
  instance_interface_names:
    - enp2s0

query_scheduler:
  max_outstanding_requests_per_tenant: 1024
  scheduler_ring:
    instance_interface_names:
      - enp2s0

querier:
  query_ingesters_within: 2h

compactor:
  working_directory: /tmp/compactor
  compactor_ring:
    instance_interface_names:
      - enp2s0
```
loki.log
If you were to not make any Loki configuration changes, then for the pods that stay up, can you shell into them to see what interfaces are available? This log appears to say that

Ultimately, this feels like something specific to your Docker setup, and I'm hopeful that once the correct interface is found, you can move forward.
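One stdlib-only way to enumerate the interfaces a process can see (a sketch, not the maintainer's exact instructions — running plain `ip a` inside the container works just as well):

```python
import socket

# Print every network interface visible to this process.
# Run inside a container to see which interfaces Loki's
# instance_interface_names settings could actually match.
for index, name in socket.if_nameindex():
    print(index, name)
```

On a typical Linux container this will at least show `lo`, plus whatever virtual interface Docker attached (often `eth0`, not the host's `enp2s0`).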
Thank you. After I checked my config, I found this in my Docker daemon configuration:

```json
{
    "data-root": "/home/taoqf/docker/",
    "default-address-pools": [{
        "base": "172.0.0.1/16",
        "size": 24
    }]
}
```

If I remove

Then, how can I fix this without changing my configuration?
OK, my suspicion is that the configuration you have for default address pools (thank you for finding this!) is

The ring code (by default) is doing some magic to find private network interfaces. The official set of

As such, my belief is that, in order to not change your Docker configuration, you'll need to modify your
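A minimal sketch of the kind of private-address check being described (a hypothetical helper, not Loki's actual code; it relies on Python's `ipaddress` definition of private ranges, which includes RFC 1918's 172.16.0.0/12 but not the 172.0.0.0/16 pool configured above):

```python
import ipaddress

def pick_private(addrs):
    """Return the first (interface, address) pair whose address is
    private, mimicking the ring code's preference for private
    interfaces when no interface is configured explicitly."""
    for name, addr in addrs:
        if ipaddress.ip_address(addr).is_private:
            return name, addr
    return None

# 172.0.0.2 would come from the custom default-address-pool above;
# it is NOT inside 172.16.0.0/12, so it is skipped in favor of the
# RFC 1918 address on the other interface.
print(pick_private([("eth0", "172.0.0.2"), ("enp2s0", "192.168.1.10")]))
# -> ('enp2s0', '192.168.1.10')
```

This is why containers addressed out of 172.0.0.0/16 can be invisible to the ring's automatic interface detection, while Docker's stock 172.17.0.0/16 range works.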
OK, thank you for your patience. I will change my Docker configuration.
Describe the bug
I could not run compose in `production/docker`.
To Reproduce
Steps to reproduce the behavior:
```shell
cd production/docker
docker compose up -d
```
Expected behavior
Environment:
Screenshots, Promtail config, or terminal output
If applicable, add any output to help explain your problem.