
Releases: ipfs/kubo

v0.39.0

27 Nov 03:47 · 2896aed

Note

This release was brought to you by the Shipyard team.

Overview

This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.

New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.

🔦 Highlights

🎯 DHT Sweep provider is now the default

The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).

What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.

Migration: The transition is automatic on upgrade. Your existing configuration is preserved:

  • If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
  • If you were using the default settings, you'll automatically get the sweep provider
  • To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false
  • Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
  • When Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client, since sweep mode is sufficient for most workloads. See caveat 4.
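
Expanding on the opt-out above, a minimal sketch using the standard ipfs config syntax (a daemon restart is required for the change to take effect):

ipfs config --json Provide.DHT.SweepEnabled false   # opt out, return to the legacy provider
ipfs config Provide.DHT.SweepEnabled                # inspect the current value
ipfs config --json Provide.DHT.SweepEnabled true    # re-enable the default sweep provider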

New features available with sweep mode:

  • Detailed statistics via ipfs provide stat (see below)
  • Automatic resume after restarts with persistent state (see below)
  • Proactive alerts when reproviding falls behind (see below)
  • Better metrics for monitoring (provider_provides_total) (see below)
  • Fast optimistic provide of new root CIDs (see below)

For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.

⚡ Fast root CID providing for immediate content discovery

When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.

To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.

This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).

By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.

Simple examples:

ipfs add file.txt                     # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car              # Same for CAR imports

Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.
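
A hedged sketch of changing those defaults via config (the keys are the ones named above; the values shown are illustrative):

ipfs config --json Import.FastProvideRoot false   # skip the immediate root provide on add/import
ipfs config --json Import.FastProvideWait true    # make add/import block until the root provide completes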

Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).

⏯️ Provider state persists across restarts

The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:

  • Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
  • Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
  • Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
  • Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.

This feature improves reliability for nodes that experience intermittent connectivity or restarts.
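
If you prefer a clean slate on every start, a minimal example using the key described above:

ipfs config --json Provide.DHT.ResumeEnabled false   # discard persisted provider state on the next start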

📊 Detailed statistics with ipfs provide stat

The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.

Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.

For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.

For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.
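
Typical invocations, using only the flags mentioned above:

ipfs provide stat                          # quick summary
ipfs provide stat --all                    # complete metrics
ipfs provide stat --workers                # a single section, e.g. worker utilization
ipfs provide stat --lan                    # LAN DHT stats (Dual DHT configurations)
watch ipfs provide stat --all --compact    # real-time monitoring in a 2-column layout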

Note

The legacy provider (when Provide.DHT.SweepEnabled=false) shows only basic statistics and does not support these flags.

🔔 Slow reprovide warnings

Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides.

When the reprovide queue consistently grows and all periodic workers are busy, a warning is displayed with:

  • Queue size and worker utilization details
  • Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
  • Command to monitor real-time progress: watch ipfs provide stat --all --compact

The alert polls every 15 minutes (to avoid alert fatigue while catching persistent issues) and only triggers after sustained growth across multiple intervals. The legacy provider is unaffected by this change.
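
If the warning keeps firing, a hedged tuning sketch for the settings named above (the numbers are illustrative, not recommendations):

ipfs config --json Provide.DHT.MaxWorkers 32                 # example value; size to your hardware and dataset
ipfs config --json Provide.DHT.DedicatedPeriodicWorkers 8    # example value
watch ipfs provide stat --all --compact                      # confirm the queue drains after a daemon restart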

📊 Metric rename: provider_provides_total

The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).

Migration: If y...


v0.39.0-rc1

17 Nov 22:01 · Pre-release

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.39.md
Release status: #10946

v0.38.2

30 Oct 02:44 · 9fd105a

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.2 is a quick patch release that improves retrieval, tracing, and memory usage.

🔦 Highlights

  • Updates boxo v0.35.1 with bitswap and HTTP retrieval fixes:
    • Fixed bitswap trace context not being passed to sessions, restoring observability for monitoring tools
    • Kubo now fetches from HTTP gateways that return errors in legacy IPLD format, improving compatibility with older providers
    • Better handling of rate-limited HTTP endpoints and clearer timeout error messages
  • Updates go-libp2p-kad-dht v0.35.1 with memory optimizations for nodes using Provide.DHT.SweepEnabled=true
  • Updates quic-go v0.55.0 to fix memory pooling where stream frames weren't returned to the pool on cancellation

For full release notes of 0.38, see 0.38.1.

📝 Changelog

Full Changelog

👨‍👩‍👧‍👦 Contributors

Contributor Commits Lines ± Files Changed
rvagg 1 +537/-481 3
Carlos Hernandez 9 +556/-218 11
Guillaume Michel 3 +139/-105 6
gammazero 8 +101/-97 14
Hector Sanjuan 1 +87/-28 5
Marcin Rataj 4 +57/-9 7
Marco Munizaga 2 +42/-14 7
Dennis Trautwein 2 +19/-7 7
Andrew Gillis 3 +3/-19 3
Rod Vagg 4 +12/-3 4
web3-bot 1 +2/-1 1
galargh 1 +1/-1 1

v0.38.1

08 Oct 21:34 · 6bf52ae

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.

v0.38.1 includes fixes for migrations on Windows and for the Pebble datastore; if you are using either, make sure to use the .1 release.

🔦 Highlights

🚀 Repository migration: simplified provide configuration

This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.

The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.

Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.

Read more about the new system below.

🧹 Experimental Sweeping DHT Provider

A new experimental DHT provider is available as an alternative to both the default provider and the resource-intensive accelerated DHT client. Enable it via Provide.DHT.SweepEnabled.

How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.

Reprovide Cycle Comparison

The diagram shows how sweep mode avoids the hourly traffic spikes of Accelerated DHT while maintaining similar effectiveness. By grouping CIDs into keyspace regions and processing them in batches, sweep mode reduces memory overhead and creates predictable network patterns.

Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.

Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.
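
For example, to start a node with that logging configuration (the GOLOG_LOG_LEVEL value is the one given above):

GOLOG_LOG_LEVEL="error,provider=debug,dht/provider=debug" ipfs daemon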

Note

This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.

For configuration details, see Provide.DHT. For metrics documentation, see Provide metrics.

📊 Exposed DHT metrics

Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.
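
A quick way to spot-check these counters, assuming the default RPC API address and Kubo's Prometheus endpoint at /debug/metrics/prometheus:

curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep -E 'total_provide_count_total|rpc_(inbound|outbound)'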

🚨 Improved gateway error pages with diagnostic tools

Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:

Improved gateway error page showing retrieval diagnostics

  • Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
  • Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
  • Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.
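
A hedged example of pointing the 504 page at a different diagnostic service, or disabling the button entirely (the custom URL is a placeholder):

ipfs config Gateway.DiagnosticServiceURL https://check.example.org   # hypothetical self-hosted checker
ipfs config Gateway.DiagnosticServiceURL ""                          # empty string disables the button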

🎨 Updated WebUI

The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.

  • Diagnostics: Logs - Debug issues in real-time by adjusting log level without restart (global or per-subsystem like bitswap)
  • Files: Check Retrieval - Check if content is available to other peers directly from the Files screen
  • Diagnostics: Retrieval Results - Find out why content won't load or who is providing it to the network
  • Peers: Agent Versions - Know what software peers run
  • Files: Custom Sorting - Find files faster with new sorting

Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.

📌 Pin name improvements

ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).

🛠️ Identity CID size enforcement and ipfs files write fixes

Identity CID size limits are now enforced

Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.

  • ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
  • ipfs files write prevents creation of oversized identity CIDs
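
For illustration, a sketch assuming the file sizes shown in the comments (names and sizes are hypothetical):

ipfs add --hash=identity tiny.txt    # ok: a 100-byte file fits within the 128-byte identity CID limit
ipfs add --hash=identity small.txt   # error: a 200-byte file exceeds the enforced maximum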

Multiple ipfs files write bugs have been fixed

This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.

📤 Provide Filestore and Urlstore blocks on write

Improvements to the providing system in the last release (provide blocks according to the configured Strategy) left out Filestore and [Urlstore](https:/...


v0.38.0

02 Oct 01:46 · 34debcb

Warning

  • ⚠️ Windows users should update to 0.38.1 due to #11009
  • ⚠️ Pebble users should update to 0.38.1 due to #11011
  • 🟢 macOS and Linux are free to upgrade

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.0 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.

🔦 Highlights

🚀 Repository migration: simplified provide configuration

This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.

The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.

Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.

Read more about the new system below.

🧹 Experimental Sweeping DHT Provider

A new experimental DHT provider is available as an alternative to both the default provider and the resource-intensive accelerated DHT client. Enable it via Provide.DHT.SweepEnabled.

How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.

Reprovide Cycle Comparison

The diagram shows how sweep mode avoids the hourly traffic spikes of Accelerated DHT while maintaining similar effectiveness. By grouping CIDs into keyspace regions and processing them in batches, sweep mode reduces memory overhead and creates predictable network patterns.

Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.

Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.

Note

This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.

For configuration details, see Provide.DHT. For metrics documentation, see Provide metrics.

📊 Exposed DHT metrics

Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.

🚨 Improved gateway error pages with diagnostic tools

Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:

Improved gateway error page showing retrieval diagnostics

  • Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
  • Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
  • Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.

🎨 Updated WebUI

The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.

  • Diagnostics: Logs - Debug issues in real-time by adjusting log level without restart (global or per-subsystem like bitswap)
  • Files: Check Retrieval - Check if content is available to other peers directly from the Files screen
  • Diagnostics: Retrieval Results - Find out why content won't load or who is providing it to the network
  • Peers: Agent Versions - Know what software peers run
  • Files: Custom Sorting - Find files faster with new sorting

Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.

📌 Pin name improvements

ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).

🛠️ Identity CID size enforcement and ipfs files write fixes

Identity CID size limits are now enforced

Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.

  • ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
  • ipfs files write prevents creation of oversized identity CIDs

Multiple ipfs files write bugs have been fixed

This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.

📤 Provide Filestore and Urlstore blocks on write

Improvements to the providing system in the last release (provide blocks according to the configured [Strategy](https://github.c...


v0.38.0-rc2

27 Sep 02:49 · 070177b · Pre-release

This release was brought to you by the Shipyard team.

v0.38.0-rc1

19 Sep 21:17 · d4b446b · Pre-release

This release was brought to you by the Shipyard team.

v0.37.0

27 Aug 20:03 · 6898472

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.37.0 introduces embedded repository migrations, gateway resource protection, complete AutoConf control, improved reprovider strategies, and anonymous telemetry for better feature prioritization. This release significantly improves memory efficiency, network configuration flexibility, and operational reliability while maintaining full backward compatibility.

🔦 Highlights

🚀 Repository migration from v16 to v17 with embedded tooling

This release migrates the Kubo repository from version 16 to version 17. Migrations are now built directly into the binary - completing in milliseconds without internet access or external downloads.

ipfs daemon --migrate performs migrations automatically. Manual migration: ipfs repo migrate --to=17 (or --to=16 --allow-downgrade for compatibility). Embedded migrations apply to v17+; older versions still require external tools.

Legacy migration deprecation: Support for legacy migrations that download binaries from the internet will be removed in a future version. Only embedded migrations for the last 3 releases will be supported. Users with very old repositories should update in stages rather than skipping multiple versions.

🚦 Gateway concurrent request limits and retrieval timeouts

New configurable limits protect gateway resources during high load:

  • Gateway.RetrievalTimeout (default: 30s): Maximum duration for content retrieval. Returns 504 Gateway Timeout when exceeded; applies both to the initial retrieval (time to first byte) and to the time between subsequent writes.
  • Gateway.MaxConcurrentRequests (default: 4096): Limits concurrent HTTP requests. Returns 429 Too Many Requests when exceeded. Protects nodes from traffic spikes and resource exhaustion, especially useful behind reverse proxies without rate-limiting.
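
A hedged tuning sketch for the two fields above (values are illustrative; RetrievalTimeout takes a duration string, MaxConcurrentRequests a number):

ipfs config Gateway.RetrievalTimeout 60s                 # allow slower retrievals before a 504 is returned
ipfs config --json Gateway.MaxConcurrentRequests 1024    # lower the cap on smaller nodes before 429s kick in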

New Prometheus metrics for monitoring:

  • ipfs_http_gw_concurrent_requests: Current requests being processed
  • ipfs_http_gw_responses_total: HTTP responses by status code
  • ipfs_http_gw_retrieval_timeouts_total: Timeouts by status code and truncation status

Tuning tips:

  • Monitor metrics to understand gateway behavior and adjust based on observations
  • Watch ipfs_http_gw_concurrent_requests for saturation
  • Track ipfs_http_gw_retrieval_timeouts_total vs success rates to identify timeout patterns indicating routing or storage provider issues

🔧 AutoConf: Complete control over network defaults

Configuration fields now support ["auto"] placeholders that resolve to network defaults from AutoConf.URL. These defaults can be inspected, replaced with custom values, or disabled entirely. Previously, empty configuration fields like Routing.DelegatedRouters: [] would use hardcoded defaults - this system makes those defaults explicit through "auto" values. When upgrading to Kubo 0.37, custom configurations remain unchanged.

New --expand-auto flag shows resolved values for any config field:

ipfs config show --expand-auto                      # View all resolved endpoints
ipfs config Bootstrap --expand-auto                 # Check specific values
ipfs config Routing.DelegatedRouters --expand-auto
ipfs config DNS.Resolvers --expand-auto

Configuration can be managed via:

  • Replace "auto" with custom endpoints or set [] to disable features
  • Switch modes with --profile=autoconf-on|autoconf-off
  • Configure via AutoConf.Enabled and custom manifests via AutoConf.URL

# Enable automatic configuration
ipfs config profiles apply autoconf-on

# Or manually set specific fields
ipfs config Bootstrap '["auto"]'
ipfs config --json DNS.Resolvers '{".": ["https://dns.example.com/dns-query"], "eth.": ["auto"]}'

Organizations can host custom AutoConf manifests for private networks. See AutoConf documentation and format spec at https://conf.ipfs-mainnet.org/

🗑️ Clear provide queue when reprovide strategy changes

Changing Reprovider.Strategy and restarting Kubo now automatically clears the provide queue. Only content matching the new strategy will be announced.

Manual queue clearing is also available:

  • ipfs provide clear - clear all queued content announcements

Note

Upgrading to Kubo 0.37 will automatically clear any preexisting provide queue. The next time Reprovider.Interval hits, Reprovider.Strategy will be executed on a clean slate, ensuring consistent behavior with your current configuration.

🪵 Revamped ipfs log level command

The ipfs log level command has been completely revamped to support both getting and setting log levels with a unified interface.

New: Getting log levels

  • ipfs log level - Shows default level only
  • ipfs log level all - Shows log level for every subsystem, including default level
  • ipfs log level foo - Shows log level for a specific subsystem only
  • Kubo RPC API: POST /api/v0/log/level?arg=<subsystem>

Enhanced: Setting log levels

  • ipfs log level foo debug - Sets "foo" subsystem to "debug" level
  • ipfs log level all info - Sets all subsystems to "info" level (convenient, no escaping)
  • ipfs log level '*' info - Equivalent to above but requires shell escaping
  • ipfs log level foo default - Sets "foo" subsystem to current default level

The command now provides full visibility into your current logging configuration while maintaining full backward compatibility. Both all and * work for specifying all subsystems, with all being more convenient since it doesn't require shell escaping.

🧷 Named pins in ipfs add command

Added --pin-name flag to ipfs add for assigning names to pins.

$ ipfs add --pin-name=testname cat.jpg
added bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi cat.jpg

$ ipfs pin ls --names
bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi recursive testname

📝 New IPNS publishing options

Added support for controlling IPNS record publishing strategies with new command flags and configuration.

New command flags:

# Publish without network connectivity (local datastore only)
ipfs name publish --allow-offline /ipfs/QmHash

# Publish without DHT connectivity (uses local datastore and HTTP delegated publishers)
ipfs name publish --allow-delegated /ipfs/QmHash

Delegated publishers configuration:

Ipns.DelegatedPublishers configures HTTP endpoints for IPNS publishing. Supports "auto" for network defaults or custom HTTP endpoints. The --allow-delegated flag enables publishing through these endpoints without requiring DHT connectivity, useful for nodes behind restrictive networks or during testing.
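
A hedged configuration sketch ("auto" resolves to network defaults; the custom endpoint is a placeholder):

ipfs config --json Ipns.DelegatedPublishers '["auto"]'                           # use network defaults
ipfs config --json Ipns.DelegatedPublishers '["https://delegated.example.net"]'  # hypothetical custom endpoint
ipfs name publish --allow-delegated /ipfs/QmHash                                 # publish via delegated publishers without DHT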

🔢 Custom sequence numbers in ipfs name publish

Added --sequence flag to ipfs name publish for setting custom sequence numbers in IPNS records. This enables advanced use cases like manually coordinating updates across multiple nodes. See ipfs name publish --help for details.
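
For example (the sequence number and path are illustrative):

ipfs name publish --sequence=42 /ipfs/QmHash   # force a specific sequence number in the published IPNS record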

⚙️ Reprovider.Strategy is now consistently respected

Prior to this version, files added, blocks received, etc. were "provided" to the network (announced on the DHT) regardless of the "reproviding strategy" setting. For example:

  • Strategy set to "pi...

v0.37.0-rc1

21 Aug 21:02 · 255bc88 · Pre-release

This release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.37.md
Release status: #10867

v0.36.0

14 Jul 18:59 · 37b8411

Note

This release was brought to you by the Shipyard team.

Overview

🔦 Highlights

HTTP Retrieval Client Now Enabled by Default

This release promotes the HTTP Retrieval client from an experimental feature to a standard feature that is enabled by default. When possible, Kubo will retrieve blocks over plain HTTPS (HTTP/2) without any extra user configuration.

See HTTPRetrieval for more details.

Bitswap Broadcast Reduction

The Bitswap client now supports broadcast reduction logic, which is enabled by default. This feature significantly reduces the number of broadcast messages sent to peers, resulting in lower bandwidth usage during load spikes.

The overall logic works by sending broadcasts to non-local peers only if those peers have previously replied that they want data blocks. To minimize impact on existing workloads, by default, broadcasts are still always sent to peers on the local network, or to those defined in Peering.Peers.

At Shipyard, we conducted A/B testing on our internal Kubo staging gateway with organic CID requests to ipfs.io. While these results may not exactly match your specific workload, the benefits proved significant enough to make this feature default. Here are the key findings:

  • Dramatic Resource Usage Reduction: Internal testing demonstrated a reduction in Bitswap broadcast messages by 80-98% and network bandwidth savings of 50-95%, with the greatest improvements occurring during high traffic and peer spikes. These efficiency gains lower operational costs of running Kubo under high load and improve the IPFS Mainnet (which is >80% Kubo-based) by reducing ambient traffic for all connected peers.
  • Improved Memory Stability: Memory stays stable even during major CID request spikes that increase peer count, preventing the out-of-memory (OOM) issues found in earlier Kubo versions.
  • Data Retrieval Performance Remains Strong: Our tests suggest that Kubo gateway hosts with broadcast reduction enabled achieve similar or better HTTP 200 success rates compared to version 0.35, while maintaining equivalent or higher want-have responses and unique blocks received.

For more information about our A/B tests, see kubo#10825.

To revert to the previous behavior for your own A/B testing, set Internal.Bitswap.BroadcastControl.Enable to false and monitor relevant metrics (ipfs_bitswap_bcast_skips_total, ipfs_bitswap_haves_received, ipfs_bitswap_unique_blocks_received, ipfs_bitswap_wanthaves_broadcast, HTTP 200 success rate).
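
As a command, that opt-out might look like this (assuming the standard ipfs config syntax):

ipfs config --json Internal.Bitswap.BroadcastControl.Enable false   # restore the previous broadcast behavior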

For a description of the configuration items, see the documentation of Internal.Bitswap.BroadcastControl.

Update go-log to v2

go-log v2 has been out for quite a while now and it's time to deprecate v1.

  • Replace all use of go-log with go-log/v2
  • Makes /api/v0/log/tail useful over HTTP
  • Fixes ipfs log tail
  • Removes support for ContextWithLoggable as this is not needed for tracing-like functionality

Kubo now uses AutoNATv2 as a client

This Kubo release starts utilizing AutoNATv2 client functionality. go-libp2p v0.42 supports and depends on both AutoNATv1 and v2, and the Autorelay feature continues to use v1. go-libp2p v0.43+ will discontinue internal use of AutoNATv1. We will maintain support for both v1 and v2 until then, though v1 will gradually be deprecated and ultimately removed.

Smarter AutoTLS registration

This update to libp2p and AutoTLS incorporates AutoNATv2 changes. It aims to reduce false-positive scenarios where AutoTLS certificate registration occurred before a publicly dialable multiaddr was available. This should result in fewer error logs during node start, especially when IPv6 and/or IPv4 NATs with UPnP/PCP/NAT-PMP are at play.

Overwrite option for files cp command

The ipfs files cp command now has a --force option that allows it to overwrite existing files. Attempting to overwrite an existing directory results in an error.
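
For example (the CID and MFS path are placeholders):

ipfs files cp --force /ipfs/QmHash /docs/readme.txt   # overwrite an existing file at /docs/readme.txt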

Gateway now supports negative HTTP Range requests

The latest update to boxo/gateway adds support for negative HTTP Range requests (requesting the last N bytes of a resource). This provides greater interoperability with generic HTTP-based tools. For example, WebRecorder's https://replayweb.page/ can now directly load website snapshots from Kubo-backed URLs.
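
As an illustration, assuming the default local gateway address, a negative range requests the last N bytes of a resource:

curl -H "Range: bytes=-1024" http://127.0.0.1:8080/ipfs/QmHash   # fetch only the final 1024 bytes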

Option for filestore command to remove bad blocks

The experimental filestore command has a new option, --remove-bad-blocks, to verify objects in the filestore and remove those that fail verification.

ConnMgr.SilencePeriod configuration setting exposed

This connection manager option controls how often connections are swept and potentially terminated. See the ConnMgr documentation.

Fix handling of EDITOR env var

The ipfs config edit command did not correctly handle the EDITOR environment variable when its value contains flags and arguments, e.g. EDITOR=emacs -nw. The command was treating the entire value of $EDITOR as the name of the editor command. This has been fixed to parse the value of $EDITOR into separate arguments, respecting shell quoting.
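
For example, a multi-word editor value now works as expected:

EDITOR="emacs -nw" ipfs config edit   # opens the config in terminal Emacs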

📦️ Important dependency updates

  • update go-libp2p to v0.42.0
  • update go-libp2p-kad-dht to v0.33.0
  • update boxo to v0.33.0 (incl. v0.32.0)
  • update gateway-conformance to v0.8
  • update p2p-forge/client to v0.6.0
  • update github.com/cockroachdb/pebble/v2 to v2.0.6 for Go 1.25 support

📝 Changelog

Full Changelog