Retrieve content using CID from IPFS Cluster

I installed IPFS Cluster by following this link
IPFS Cluster | IPFS Docs and now I am trying to retrieve a file added to the cluster nodes via its CID using this script:

```python
import requests

def retrieve(CID):
    url = "http://localhost:8080/ipfs/" + CID
    try:
        response = requests.get(url, allow_redirects=True)
        if response.status_code == 200:
            with open("file_downloaded.txt", "wb") as file:
                file.write(response.content)
            print("File downloaded successfully!")
        else:
            print(f"Error {response.status_code} when downloading the file.")
    except Exception as e:
        print(f"An error occurred during the request: {e}")

CID = "Qmeff6BfzCcd7ekaUMTeQbMqMPvMpwrP8LPRjkwAr4XVyD"
retrieve(CID)
```

However, the result I get is this:

```
An error occurred during the request: HTTPConnectionPool(host='bafybeihstf2oihh4imkkkahxbswylzqr3xi2mrwkqlrv3h7rvsox2sgwnq.ipfs.localhost', port=8080): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f6a1f202bc0>: Failed to resolve 'bafybeihstf2oihh4imkkkahxbswylzqr3xi2mrwkqlrv3h7rvsox2sgwnq.ipfs.localhost' ([Errno -2] Name or service not known)"))
```

and I don’t know how to solve it

Your original request is being converted and redirected to the “subdomain gateway” form instead of the “path gateway” form. Any chance you have enabled that? kubo/docs/config.md at master · ipfs/kubo · GitHub

Browsers know how to deal with *.ipfs.localhost, I think, but I’m not sure how to make the machine resolve that directly. You can search for something like “resolving localhost subdomain” and go from there, or just disable the subdomain gateway.
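For the “disable the subdomain gateway” route, the Kubo config docs describe overriding the implicit `localhost` gateway via `Gateway.PublicGateways`. A sketch of what that override might look like (verify the exact keys against the config.md for your Kubo version):

```json
{
  "Gateway": {
    "PublicGateways": {
      "localhost": {
        "UseSubdomains": false,
        "Paths": ["/ipfs", "/ipns"]
      }
    }
  }
}
```

With `UseSubdomains` set to `false`, `http://localhost:8080/ipfs/<CID>` should be served directly as a path gateway instead of redirecting to `<cid>.ipfs.localhost`. Restart the daemon after changing the config.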

Also, if you have not enabled the subdomain gateway at all, you may try requesting 127.0.0.1:8080 instead of localhost:8080.
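A sketch of that suggestion in Python. The helper name `build_gateway_url` is mine, not from any IPFS library; the point is only that requesting the IP address never triggers the implicit `localhost` subdomain redirect, so no `*.ipfs.localhost` name has to resolve:

```python
def build_gateway_url(cid, host="127.0.0.1", port=8080):
    """Build a path-gateway URL for a CID against a local Kubo gateway."""
    return f"http://{host}:{port}/ipfs/{cid}"

def retrieve(cid):
    import requests  # imported here so the URL helper itself has no dependencies
    response = requests.get(build_gateway_url(cid))
    response.raise_for_status()  # raise instead of silently saving an error page
    return response.content

# Usage against a running gateway:
#   data = retrieve("Qmeff6BfzCcd7ekaUMTeQbMqMPvMpwrP8LPRjkwAr4XVyD")
#   open("file_downloaded.txt", "wb").write(data)
```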

I don’t have that flag in my configuration. The configuration for the IPFS node that tries to retrieve the content by CID is:

```json
{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/ip4/172.20.0.5/tcp/9096/p2p/12D3KooWJ4TbaQQvQxv7qq7PiP7pYeGysuiUDWZNFV7DWfPwBxHD",
    "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWJ4TbaQQvQxv7qq7PiP7pYeGysuiUDWZNFV7DWfPwBxHD",
    "/ip4/172.20.0.6/tcp/9096/p2p/12D3KooWHwdMs1rgykeoqSgGUCFgUafK9uUEUPCVXPHyiQZZqQpZ",
    "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWHwdMs1rgykeoqSgGUCFgUafK9uUEUPCVXPHyiQZZqQpZ",
    "/ip4/172.20.0.7/tcp/9096/p2p/12D3KooWPBBaX8XsKyiQhrCVZ289z3EQY6XEmZTQx8pjLYfvxESh",
    "/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWPBBaX8XsKyiQhrCVZ289z3EQY6XEmZTQx8pjLYfvxESh",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "Libp2pStreamMounting": true,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "DeserializedResponses": null,
    "DisableHTMLErrors": null,
    "ExposeRoutingAPI": null,
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWGKbHEiqZc1uxGQgsh3ABqtD6ccvYv1mwAbcWMe3JmxBS",
    "PrivKey": "CAESQEpZA+8FIs+ntKpFko5w4nJnOS/SQLhNAynx9W3kXc+6YKLKbmraPW1qYVenFYwnRpyF3HkdIKQY6yhhts4QEYk="
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "AcceleratedDHTClient": false,
    "Methods": null,
    "Routers": null
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "RelayClient": {},
    "RelayService": {},
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}
```

while the docker-compose.yml for the cluster is

```yaml
version: '3.4'

# This is an example docker-compose file to quickly test an IPFS Cluster
# with multiple peers on a contained environment.
# It runs 3 cluster peers (cluster0, cluster1...) attached to kubo daemons
# (ipfs0, ipfs1...) using the CRDT consensus component. Cluster peers
# autodiscover themselves using mDNS on the docker internal network.
#
# To interact with the cluster use "ipfs-cluster-ctl" (the cluster0 API port is
# exposed to the localhost. You can also "docker exec -ti cluster0 sh" and run
# it from the container. "ipfs-cluster-ctl peers ls" should show all 3 peers a few
# seconds after start.
#
# For persistence, a "compose" folder is created and used to store configurations
# and states. This can be used to edit configurations in subsequent runs. It looks
# as follows:
#
# compose/
# |-- cluster0
# |-- cluster1
# |-- ...
# |-- ipfs0
# |-- ipfs1
# |-- ...
#
# During the first start, default configurations are created for all peers.

services:

##################################################################################
## Cluster PEER 0 ################################################################
##################################################################################

  ipfs0:
    container_name: ipfs0
    image: ipfs/kubo:release
    ports:
      - "4001:4001" # ipfs swarm - expose if needed/wanted
      - "5001:5001" # ipfs api - expose if needed/wanted
      - "8080:8080" # ipfs gateway - expose if needed/wanted
    volumes:
      - ./compose/ipfs0:/data/ipfs

  cluster0:
    container_name: cluster0
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    environment:
      CLUSTER_PEERNAME: cluster0
      CLUSTER_SECRET: ${CLUSTER_SECRET} # From shell variable if set
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*' # Trust all peers in Cluster
      CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS: /ip4/0.0.0.0/tcp/9094 # Expose API
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    ports:
      # Open API port (allows ipfs-cluster-ctl usage on host)
      - "127.0.0.1:9094:9094"
      # The cluster swarm port would need to be exposed if this container
      # was to connect to cluster peers on other hosts.
      # But this is just a testing cluster.
      # - "9095:9095" # Cluster IPFS Proxy endpoint
      # - "9096:9096" # Cluster swarm endpoint
    volumes:
      - ./compose/cluster0:/data/ipfs-cluster

##################################################################################
## Cluster PEER 1 ################################################################
##################################################################################

# See Cluster PEER 0 for comments (all removed here and below)

  ipfs1:
    container_name: ipfs1
    image: ipfs/kubo:release
    volumes:
      - ./compose/ipfs1:/data/ipfs

  cluster1:
    container_name: cluster1
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs1
    environment:
      CLUSTER_PEERNAME: cluster1
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs1/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster1:/data/ipfs-cluster

##################################################################################
## Cluster PEER 2 ################################################################
##################################################################################

# See Cluster PEER 0 for comments (all removed here and below)

  ipfs2:
    container_name: ipfs2
    image: ipfs/kubo:release
    volumes:
      - ./compose/ipfs2:/data/ipfs

  cluster2:
    container_name: cluster2
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs2
    environment:
      CLUSTER_PEERNAME: cluster2
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs2/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster2:/data/ipfs-cluster

# For adding more peers, copy PEER 1 and rename things to ipfs2, cluster2.
# Keep bootstrapping to cluster0.
```
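Note that in this compose file only ipfs0’s ports are published to the host, so anything running on the host has to go through ipfs0’s gateway on 8080 (or cluster0’s REST API on 9094). A possible sanity check, assuming `ipfs-cluster-ctl` is installed on the host (the file name and `<CID>` placeholder are illustrative):

```shell
# Confirm all three cluster peers see each other.
ipfs-cluster-ctl peers ls

# Add a file through cluster0's REST API on 127.0.0.1:9094;
# this pins it across the cluster and prints its CID.
ipfs-cluster-ctl add somefile.txt

# Check replication status for that CID.
ipfs-cluster-ctl status <CID>

# Fetch it back through ipfs0's gateway (the only one exposed on the host).
curl "http://127.0.0.1:8080/ipfs/<CID>" -o file_downloaded.txt
```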

Have you tried this?