This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now the default) announces your content to the network without the traffic spikes that can overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.
New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.
🔦 Highlights
🎯 DHT Sweep provider is now the default
The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).
What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.
Migration: The transition is automatic on upgrade. Your existing configuration is preserved:
If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
If you were using the default settings, you'll automatically get the sweep provider
To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false
Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
When Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client, as sweep mode is sufficient for most workloads. See caveat 4.
New features available with sweep mode:
Detailed statistics via ipfs provide stat (see below)
Automatic resume after restarts with persistent state (see below)
Proactive alerts when reproviding falls behind (see below)
Better metrics for monitoring (provider_provides_total) (see below)
Fast optimistic provide of new root CIDs (see below)
⚡ Fast root CID providing for immediate content discovery
When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.
To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.
This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).
By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.
Simple examples:
ipfs add file.txt # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car # Same for CAR imports
Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.
Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).
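For example, to change the defaults for your node using the config keys above (values here just illustrate the syntax):
ipfs config --json Import.FastProvideRoot true # already the default
ipfs config --json Import.FastProvideWait true # make ipfs add block until the root provide completes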
⏯️ Provider state persists across restarts
The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:
Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.
This feature improves reliability for nodes that experience intermittent connectivity or restarts.
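To discard persisted provider state from a previous run, disable resume with the config key described above:
ipfs config --json Provide.DHT.ResumeEnabled false # start each run with a clean provide state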
📊 Detailed statistics with ipfs provide stat
The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.
Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.
For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.
Legacy provider (when Provide.DHT.SweepEnabled=false) shows basic statistics without flag support.
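Typical invocations, using the flags described above:
ipfs provide stat # quick summary
ipfs provide stat --all # complete metrics
ipfs provide stat --network --lan # LAN DHT network stats on Dual DHT setups
watch ipfs provide stat --all --compact # real-time 2-column view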
🔔 Slow reprovide warnings
Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides.
When the reprovide queue consistently grows and all periodic workers are busy, a warning displays with:
Queue size and worker utilization details
Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
Command to monitor real-time progress: watch ipfs provide stat --all --compact
The alert polls every 15 minutes (to avoid alert fatigue while catching persistent issues) and only triggers after sustained growth across multiple intervals. The legacy provider is unaffected by this change.
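If the warning appears, raising the worker limits is the suggested remedy. A minimal sketch using the config keys named above; the values are illustrative, not recommendations, and a daemon restart is needed for them to take effect:
ipfs config --json Provide.DHT.MaxWorkers 16 # example value, tune to your dataset
ipfs config --json Provide.DHT.DedicatedPeriodicWorkers 4 # example value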
📊 Metric rename: provider_provides_total
The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).
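If you scrape Kubo's metrics, update any dashboards or alerts referencing the old name. A quick way to confirm the new metric is exposed, assuming the default RPC address:
curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep provider_provides_total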
Kubo 0.38 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.
v0.38.1 includes fixes for migrations on Windows and the Pebble datastore – if you are using either, make sure to use the .1 release.
🔦 Highlights
🚀 Repository migration: simplified provide configuration
This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.
The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.
Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.
🧹 Experimental sweeping DHT provider
How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.
Compared with the hourly traffic spikes of the Accelerated DHT client, sweep mode maintains similar effectiveness while smoothing traffic: by grouping CIDs into keyspace regions and processing them in batches, it reduces memory overhead and creates predictable network patterns.
Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.
Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.
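For example, to run the daemon with the detailed provider logging described above:
GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug ipfs daemon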
Note
This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.
Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.
🚨 Improved gateway error pages with diagnostic tools
Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:
Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.
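For example, to point the diagnostic button at your own service, or to disable it entirely (the URL below is a placeholder):
ipfs config Gateway.DiagnosticServiceURL "https://diagnostics.example.com"
ipfs config Gateway.DiagnosticServiceURL "" # disable the "Check CID retrievability" button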
🎨 Updated WebUI
The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.
Diagnostics: Logs - debug issues in real time by adjusting log levels without restart (global or per-subsystem like bitswap)
Files: Check Retrieval - check if content is available to other peers directly from the Files screen
Diagnostics: Retrieval Results - find out why content won't load or who is providing it to the network
Peers: Agent Versions - know what software peers run
Files: Custom Sorting - find files faster with new sorting
Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.
📌 Pin name improvements
ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).
🛠️ Identity CID size enforcement and ipfs files write fixes
Identity CID size limits are now enforced
Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.
ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
ipfs files write prevents creation of oversized identity CIDs
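Illustrative commands (file contents must fit within the 128-byte limit for identity hashing to succeed):
ipfs add --hash=identity tiny.txt # errors if tiny.txt exceeds 128 bytes
ipfs add --inline --inline-limit=128 file.txt # inline small blocks up to the maximum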
Multiple ipfs files write bugs have been fixed
This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.
📤 Provide Filestore and Urlstore blocks on write
Improvements to the providing system in the last release (provide blocks according to the configured Strategy) left out Filestore and Urlstore blocks. This release fixes that: blocks backed by these experimental features are now provided on write as well.
Kubo 0.37.0 introduces embedded repository migrations, gateway resource protection, complete AutoConf control, improved reprovider strategies, and anonymous telemetry for better feature prioritization. This release significantly improves memory efficiency, network configuration flexibility, and operational reliability while maintaining full backward compatibility.
🔦 Highlights
🚀 Repository migration from v16 to v17 with embedded tooling
This release migrates the Kubo repository from version 16 to version 17. Migrations are now built directly into the binary - completing in milliseconds without internet access or external downloads.
ipfs daemon --migrate performs migrations automatically. Manual migration: ipfs repo migrate --to=17 (or --to=16 --allow-downgrade for compatibility). Embedded migrations apply to v17+; older versions still require external tools.
Legacy migration deprecation: Support for legacy migrations that download binaries from the internet will be removed in a future version. Only embedded migrations for the last 3 releases will be supported. Users with very old repositories should update in stages rather than skipping multiple versions.
🚦 Gateway concurrent request limits and retrieval timeouts
New configurable limits protect gateway resources during high load:
Gateway.RetrievalTimeout (default: 30s): Maximum duration for content retrieval. Returns 504 Gateway Timeout when exceeded; applies both to the initial retrieval (time to first byte) and to the interval between subsequent writes.
Gateway.MaxConcurrentRequests (default: 4096): Limits concurrent HTTP requests. Returns 429 Too Many Requests when exceeded. Protects nodes from traffic spikes and resource exhaustion, especially useful behind reverse proxies without rate-limiting.
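Both limits can be tuned to match your deployment; the values below are illustrative:
ipfs config Gateway.RetrievalTimeout 60s
ipfs config --json Gateway.MaxConcurrentRequests 1024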
New Prometheus metrics for monitoring:
ipfs_http_gw_concurrent_requests: Current requests being processed
ipfs_http_gw_responses_total: HTTP responses by status code
ipfs_http_gw_retrieval_timeouts_total: Timeouts by status code and truncation status
Tuning tips:
Monitor metrics to understand gateway behavior and adjust based on observations
Watch ipfs_http_gw_concurrent_requests for saturation
Track ipfs_http_gw_retrieval_timeouts_total vs success rates to identify timeout patterns indicating routing or storage provider issues
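A simple way to eyeball these counters on a running node, assuming the standard metrics endpoint on the RPC port:
curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep ipfs_http_gw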
🔧 AutoConf: Complete control over network defaults
Configuration fields now support ["auto"] placeholders that resolve to network defaults from AutoConf.URL. These defaults can be inspected, replaced with custom values, or disabled entirely. Previously, empty configuration fields like Routing.DelegatedRouters: [] would use hardcoded defaults - this system makes those defaults explicit through "auto" values. When upgrading to Kubo 0.37, custom configurations remain unchanged.
New --expand-auto flag shows resolved values for any config field:
ipfs config show --expand-auto # View all resolved endpoints
ipfs config Bootstrap --expand-auto # Check specific values
ipfs config Routing.DelegatedRouters --expand-auto
ipfs config DNS.Resolvers --expand-auto
Configuration can be managed via:
Replace "auto" with custom endpoints or set [] to disable features
Switch modes with --profile=autoconf-on|autoconf-off
Configure via AutoConf.Enabled and custom manifests via AutoConf.URL
# Enable automatic configuration
ipfs config profiles apply autoconf-on
# Or manually set specific fields
ipfs config Bootstrap '["auto"]'
ipfs config --json DNS.Resolvers '{".": ["https://dns.example.com/dns-query"], "eth.": ["auto"]}'
🗑️ Clear provide queue when reprovide strategy changes
Changing Reprovider.Strategy and restarting Kubo now automatically clears the provide queue. Only content matching the new strategy will be announced.
Manual queue clearing is also available:
ipfs provide clear - clear all queued content announcements
Note
Upgrading to Kubo 0.37 will automatically clear any preexisting provide queue. The next time Reprovider.Interval hits, Reprovider.Strategy will be executed on a clean slate, ensuring consistent behavior with your current configuration.
🪵 Revamped ipfs log level command
The ipfs log level command has been completely revamped to support both getting and setting log levels with a unified interface.
New: Getting log levels
ipfs log level - Shows default level only
ipfs log level all - Shows log level for every subsystem, including default level
ipfs log level foo - Shows log level for a specific subsystem only
Kubo RPC API: POST /api/v0/log/level?arg=<subsystem>
Setting log levels
ipfs log level all info - Sets all subsystems to "info" level (convenient, no escaping)
ipfs log level '*' info - Equivalent to above but requires shell escaping
ipfs log level foo default - Sets "foo" subsystem to current default level
The command now provides full visibility into your current logging configuration while maintaining full backward compatibility. Both all and * work for specifying all subsystems, with all being more convenient since it doesn't require shell escaping.
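Putting it together (the bitswap subsystem name is just an example):
ipfs log level # show the default level
ipfs log level all # list every subsystem's level
ipfs log level bitswap debug # set one subsystem
ipfs log level all info # set everything to info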
🧷 Named pins in ipfs add command
Added --pin-name flag to ipfs add for assigning names to pins.
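For example (the pin name is illustrative):
ipfs add --pin-name "project-backup" file.txt
ipfs pin ls --names # verify the assigned name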
🌐 IPNS publishing: offline and delegated modes
Added support for controlling IPNS record publishing strategies with new command flags and configuration.
New command flags:
# Publish without network connectivity (local datastore only)
ipfs name publish --allow-offline /ipfs/QmHash
# Publish without DHT connectivity (uses local datastore and HTTP delegated publishers)
ipfs name publish --allow-delegated /ipfs/QmHash
Delegated publishers configuration:
Ipns.DelegatedPublishers configures HTTP endpoints for IPNS publishing. Supports "auto" for network defaults or custom HTTP endpoints. The --allow-delegated flag enables publishing through these endpoints without requiring DHT connectivity, useful for nodes behind restrictive networks or during testing.
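To rely on the network defaults for delegated publishers, the field supports the "auto" placeholder (JSON array form assumed, consistent with other "auto" fields):
ipfs config --json Ipns.DelegatedPublishers '["auto"]'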
🔢 Custom sequence numbers in ipfs name publish
Added --sequence flag to ipfs name publish for setting custom sequence numbers in IPNS records. This enables advanced use cases like manually coordinating updates across multiple nodes. See ipfs name publish --help for details.
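For example (the sequence value is illustrative; QmHash is a placeholder as in the examples above):
ipfs name publish --sequence 42 /ipfs/QmHash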
⚙️ Reprovider.Strategy is now consistently respected
Prior to this version, files added, blocks received, etc. were "provided" to the network (announced on the DHT) regardless of the "reproviding strategy" setting. This release makes Reprovider.Strategy consistently respected across these operations.
HTTP Retrieval client enabled by default
This release promotes the HTTP Retrieval client from an experimental feature to a standard feature that is enabled by default. When possible, Kubo will retrieve blocks over plain HTTPS (HTTP/2) without any extra user configuration.
Bitswap broadcast reduction
The Bitswap client now supports broadcast reduction logic, which is enabled by default. This feature significantly reduces the number of broadcast messages sent to peers, resulting in lower bandwidth usage during load spikes.
The overall logic works by sending to non-local peers only if those peers have previously replied that they want data blocks. To minimize impact on existing workloads, by default, broadcasts are still always sent to peers on the local network, or the ones defined in Peering.Peers.
At Shipyard, we conducted A/B testing on our internal Kubo staging gateway with organic CID requests to ipfs.io. While these results may not exactly match your specific workload, the benefits proved significant enough to make this feature default. Here are the key findings:
Dramatic Resource Usage Reduction: Internal testing demonstrated a reduction in Bitswap broadcast messages by 80-98% and network bandwidth savings of 50-95%, with the greatest improvements occurring during high traffic and peer spikes. These efficiency gains lower operational costs of running Kubo under high load and improve the IPFS Mainnet (which is >80% Kubo-based) by reducing ambient traffic for all connected peers.
Improved Memory Stability: Memory stays stable even during major CID request spikes that increase peer count, preventing the out-of-memory (OOM) issues found in earlier Kubo versions.
Data Retrieval Performance Remains Strong: Our tests suggest that Kubo gateway hosts with broadcast reduction enabled achieve similar or better HTTP 200 success rates compared to version 0.35, while maintaining equivalent or higher want-have responses and unique blocks received.
For more information about our A/B tests, see kubo#10825.
To revert to the previous behavior for your own A/B testing, set Internal.Bitswap.BroadcastControl.Enable to false and monitor relevant metrics (ipfs_bitswap_bcast_skips_total, ipfs_bitswap_haves_received, ipfs_bitswap_unique_blocks_received, ipfs_bitswap_wanthaves_broadcast, HTTP 200 success rate).
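As a command, using the config key named above:
ipfs config --json Internal.Bitswap.BroadcastControl.Enable false # restore pre-0.36 broadcast behavior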
Deprecation of go-log v1
go-log v2 has been out for quite a while now, and it's time to deprecate v1. This release:
Replaces all use of go-log with go-log/v2
Makes /api/v0/log/tail useful over HTTP
Fixes ipfs log tail
Removes support for ContextWithLoggable as this is not needed for tracing-like functionality
Kubo now uses AutoNATv2 as a client
This Kubo release starts utilizing AutoNATv2 client functionality. go-libp2p v0.42 supports and depends on both AutoNATv1 and v2, and the AutoRelay feature continues to use v1. go-libp2p v0.43+ will discontinue internal use of AutoNATv1. We will maintain support for both versions until then, though v1 will gradually be deprecated and ultimately removed.
Smarter AutoTLS registration
This update to libp2p and AutoTLS incorporates AutoNATv2 changes. It aims to reduce false-positive scenarios where AutoTLS certificate registration occurred before a publicly dialable multiaddr was available. This should result in fewer error logs during node start, especially when IPv6 and/or IPv4 NATs with UPnP/PCP/NAT-PMP are at play.
Overwrite option for files cp command
The ipfs files cp command has a --force option to allow it to overwrite existing files. Attempting to overwrite an existing directory results in an error.
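For example (paths and CID are illustrative):
ipfs files cp /ipfs/QmHash /docs/file.txt # errors if /docs/file.txt already exists
ipfs files cp --force /ipfs/QmHash /docs/file.txt # overwrites the existing file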
Gateway now supports negative HTTP Range requests
The latest update to boxo/gateway adds support for negative HTTP Range requests, improving compatibility with the IPFS gateway specification.
This provides greater interoperability with generic HTTP-based tools. For example, WebRecorder's https://replayweb.page/ can now directly load website snapshots from Kubo-backed URLs.
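For example, fetching only the last KiB of a file through the local gateway on its default port (CID is a placeholder):
curl -H "Range: bytes=-1024" http://127.0.0.1:8080/ipfs/<cid> -o tail.bin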
Option for filestore command to remove bad blocks
The experimental filestore command has a new option, --remove-bad-blocks, to verify objects in the filestore and remove those that fail verification.
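A sketch of the intended usage; attaching the flag to the verify subcommand is an assumption, so check ipfs filestore --help on your build:
ipfs filestore verify --remove-bad-blocks # verify objects and drop those failing verification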
A new connection manager option controls how often connections are swept and potentially terminated. See the ConnMgr documentation.
Fix handling of EDITOR env var
The ipfs config edit command did not correctly handle the EDITOR environment variable when its value contains flags and arguments, e.g. EDITOR="emacs -nw". The command was treating the entire value of $EDITOR as the name of the editor command. This has been fixed to parse the value of $EDITOR into separate arguments, respecting shell quoting.