Embark: Apache Iggy
2026-02-20
Essays
Lishui, China
Summary

This article records my experience contributing to Iggy, an open source project under The Apache Software Foundation, while also sharing some of my personal insights.

About Apache

The Apache Software Foundation, a U.S. 501(c)(3) non-profit committed to software for the public good under the “Community over Code” ethos, manages hundreds1 of influential open source projects that underpin core infrastructure in the cloud, big data, and web industries. Backed by its community and sponsors, it boasts an ecosystem valued at over $30 billion USD.2

About Iggy

Apache Iggy, an open source message streaming project incubating at The ASF, is a persistent, ultra-low-latency platform built in Rust that processes millions of messages per second, powering high-efficiency real-time data pipelines and cloud-native streaming workloads.

When Editing

The syntax has been corrected with an LLM. This article was finished after the code was committed.


How I Discovered#

The ASF#

In 2021, I learned about the Log4j3 vulnerability (CVE-2021-44228)4, rated CVSSv3 10.0/10.05.
I’m sorry to have gotten to know you this way.

The Apache Iggy#

This brings me to the situation I had in early November 2025.

First, while I already knew how to use proxies in the 5th grade and “could write”6 Java code in the 6th grade, I made virtually no contributions to the open source community from 2021 to 2024. The reason is quite straightforward: I was obsessed with Minecraft during the 2021 academic year, immersed in Genshin Impact (Asia server) in 2022, and took up Arknights in 2023. These three distractions squandered most of the time I could have spent tinkering with distros and programming languages7.

Despite my early start in tech, I lack extensive experience in multiple areas: not just development experience, but also collaboration experience. This deficiency is already evident in my use8 of Git, and more shortcomings quickly surface once I engage in actual development. They are not limited to not knowing how to use Docker or distributed compilers, but extend to basic coding standards. Thus, my needs became clear:

  1. A small or mid-scale open source project, preferably led by an organization or enterprise, to correct my various bad habits;
  2. A project in its early/mid stage of development with active maintainers, so I could learn effective communication;
  3. A project with a steady pace of progress;
  4. A project written in a language I prefer.

searching-iggy.webp

This led me to Apache. After reading their charter and mission, finding their GitHub account, and conducting searches, I discovered that Apache Iggy was the only project that fully met all the above requirements.

apache/iggy

Ask for Assignment#

Pick Up One Issue#

The Apache Software Foundation encourages people to contribute. Volunteers who are unsure where to start can take a look at the project’s good first issues. That’s how I began too: I picked the oldest unclaimed issue. The reason was simple. As a Chinese high school student, I had less than 20 hours available per weekend, and my skills were rusty, so I was worried the work would drag on for a long time.

Contributor Ladder

First, you are a contributor, then a committer, and finally a PMC member or higher. You may notice there is also the PPMC, the Podling Project Management Committee, which belongs to the Apache Incubator.

GitHub is Not Necessary

Apache projects are NOT necessarily hosted on GitHub. Many long-established Apache projects, like the Apache HTTP Server Project, still use Apache SVN as their primary version control system and Bugzilla for issue tracking, rather than GitHub's Issues and Pull Requests.

Ask the Collaborators#

I figured an issue that had been around for a long time was probably important but not urgent for the maintainers, which would lessen the burden my slow progress placed on them.

Piotr Gankiewicz
@spetz. Software Engineer, Iggy Founder, Apache PPMC.
Hubert Gruszecki
@hubcio. Another crab enthusiast, Apache PPMC.

By checking who merged the pull requests, we can easily tell who has write access. I tried sending a request to @hubcio. He agreed promptly. Later, @spetz also joined the code review. Both are highly efficient and responsible collaborators, and both are ASF Members and PPMC members.

asking_for_assign.webp

Communication Methods

Email communication is also recommended at times, though note that it may be PUBLIC. You can receive real-time messages from the mailing lists by sending a subscription confirmation. For instance:

mail.webp


Get On Work#

Preparation#

Iggy can be deployed with Docker easily:

Deploy Iggy with Docker Directly, According to the Official Documentation
docker pull apache/iggy:latest
docker run --cap-add=SYS_NICE --security-opt seccomp:unconfined --ulimit memlock=-1:-1 apache/iggy:latest

Just like any normal contribution, first read through the Code of Conduct and Contributing Guidelines, then fork the repository and keep it synced. Besides, Iggy also provides a CLI.

Fish Session
git clone git@github.com:Svecco/iggy.git
cd ./iggy/ && ssh-add ~/.ssh/id_ed25519
git checkout -b 46-size-logs upstream/master
cargo install iggy-cli && iggy --help

Nearly all medium or large modern Rust projects adopt the workspace mechanism. I’m not sure why so many Rust learning materials don’t cover it as a key topic, including the Programming Rust, 2nd Edition I have right now.
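For readers who haven't met it, here is a minimal sketch of what the workspace mechanism looks like. The member paths echo Iggy's layout, but this manifest is illustrative only, not Iggy's actual root Cargo.toml.

```toml
# Root Cargo.toml of a hypothetical workspace; all members compile together
# and share a single Cargo.lock and target/ directory.
[workspace]
resolver = "2"
members = ["core/server", "core/common", "core/integration"]

[workspace.dependencies]
# Declared once here and inherited by members via `tokio.workspace = true`,
# keeping every crate in the tree on the same version.
tokio = { version = "1", features = ["full"] }
```

Running `cargo build --workspace` then builds every member at once, which is why full builds of such projects take a while.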

You'd Better Prepare Some Time Waiting for Cargo Building
╭─ $ [svecco] ~/A/iggy git!(46-size-logs)
╰─ > cargo clean
Removed 65558 files, 52.1GiB total
╭─ $ [svecco] ~/A/iggy git!(46-size-logs)
╰─ > time sh -c 'cargo build --all-features --all-targets --workspace --release 2>&1 | tail -1'
Finished `release` profile [optimized] target(s) in 3m 18s
________________________________________________________
Executed in 198.65 secs fish external
usr time 51.93 mins 407.00 micros 51.93 mins
sys time 1.46 mins 61.00 micros 1.46 mins
╭─ $ [svecco] ~/A/iggy git!(46-size-logs)
╰─ > lscpu | grep "AMD"
Vendor ID: AuthenticAMD
Model name: ***AMD Ryzen 9 9950X*** 16-Core Processor
Virtualization: AMD-V

From the project specifications, you can also see how to format your code:

About Clippy and Check

Clippy is a superset of check. Usually, if Clippy passes, you don’t need to run check.

You Can Use Prek to Handle Them All at Once
cargo fmt --all && cargo sort --workspace && cargo machete
cargo clippy --all-targets --all-features -- -D warnings
bash ./scripts/ci/licenses-list.sh --update --fix
bash ./scripts/ci/trailing-whitespace.sh --check --fix
# cargo check --all-targets --all-features
Use This To Run Benchmarks
cargo run --bin iggy-bench -- -T 50000MB pp -p 5 quic
If You Prefer Web UI
set -x IGGY_ROOT_USERNAME iggy && set -x IGGY_ROOT_PASSWORD iggy && set -x RUST_LOG debug && cargo run --bin iggy-server -- --with-default-root-credentials
set -x PUBLIC_IGGY_API_URL http://127.0.0.1:3000 && cd web/ && pnpm run dev # If dependencies weren't installed before, run pnpm install (or npm install) first

webui.webp

Learn the Structure#

Iggy's data model is aligned with mainstream message queues, but has its own naming convention.9

| Concept | Core Description |
| --- | --- |
| Stream | Top-level namespace isolating business/project message data. |
| Topic | Message classification under a Stream; maps to a message queue; one Stream has many Topics. |
| Partition | Parallel unit under a Topic; determines concurrency; one Topic has many Partitions. |
| Message | Smallest data carrier; contains payload, ID, timestamp, offset, custom headers. |
| Consumer | Message consumption endpoint; supports pull by offset, timestamp, batch; auto-commits offsets. |

Iggy’s core business data model and interaction logic. It defines the nested hierarchy from the top-level namespace Stream down to the atomic data unit Message, and standardizes the core send/poll operations between producers and consumers.

$$
\mathbf{BA} \equiv \begin{cases}
\mathbf{Hierarchy}: & \mathbf{Stream} \supset \mathbf{Topic} \supset \mathbf{Partition}_i \supset \mathbf{Message}_{i,j} \\
& (i,j \in \mathbb{N}^+) \\[0.4em]
\mathbf{Flow}: & \mathbf{Producer} \xrightarrow{\text{Send}(s,t,p,m)} \mathbf{Partition}_p \\
& \mathbf{Consumer}_c \xrightarrow{\text{Poll}(s,t,p,o,k)} \left\{ \mathbf{Message}_{p,o}, \dots \right\}
\end{cases}
$$

Iggy’s layered technical architecture. It maps the complete end-to-end data path from multi-language SDK clients through the transport layer and core processing engine to the persistent storage layer, along with the officially supported transport protocol options.

$$
\mathbf{TA} \equiv \begin{cases}
\mathbf{Layers}: & \mathbf{Client_{SDK}} \xrightarrow{\mathcal{R}} \mathbf{Transport_{[4\text{-}prot]}} \\
& \mathbf{Transport} \xrightarrow{\mathcal{D}} \mathbf{Core_{[Mgmt|HB|Router]}} \\
& \mathbf{Core} \xrightarrow{\mathcal{P}} \mathbf{Storage_{[Seg|Idx|Meta]}} \\[0.4em]
\mathbf{Constraints}: & \mathcal{C}(\mathbf{Transport}) \equiv \left( \mathbf{TCP} \lor \mathbf{HTTP} \lor \mathbf{QUIC} \lor \mathbf{WS} \right)
\end{cases}
$$

Iggy’s two core end-to-end workflows: message production and consumption. It details the full lifecycle of a message, covering client authentication, routing and persistent append for production, as well as subscription, batch fetching and offset commit for consumption.

$$
\mathbf{Workflow} \equiv \mathbf{WF_{Prod}} \land \mathbf{WF_{Cons}} \\[0.6em]
\mathbf{WF_{Prod}} \equiv \mathbf{Producer} \xrightarrow{\text{Connect}} \mathbf{Transport} \xrightarrow{\text{Auth}(u,p)} \mathbf{Core} \\
\quad \xrightarrow{\text{Route}(s,t,p)} \mathbf{Partition} \xrightarrow{\text{Append}(m)} \mathbf{Storage_{Seg}} \\[0.6em]
\mathbf{WF_{Cons}} \equiv \mathbf{Consumer} \xrightarrow{\text{Subscribe}(s,t)} \mathbf{Core} \xrightarrow{\text{Poll}(p,o,k)} \mathbf{Storage_{Seg}} \\
\quad \xrightarrow{\text{Read}(o,k)} \left\{ m_1, \dots, m_k \right\} \xrightarrow{\text{Commit}(o+k)} \mathbf{Core}
$$
| Path | Work |
| --- | --- |
| core/server/src/configs | Config management |
| core/server/src/log | Logging management |

As for the issue at hand: server.toml is the default configuration file, defaults.rs handles the parsing of default values, and validators.rs is responsible for validating data integrity, preventing issues like rotation intervals approaching zero10.

In the integration phase, as the name suggests, we perform integration testing, which exercises features fully and independently. Relying solely on unit testing is insufficient for complex projects involving overall system operation.

gitlog.webp

Implementing Feats#

core/server/src/log/logger.rs:Mainly Functions
fn calculate_max_files(max_total_size, max_file_size,) -> usize { ... }
fn install_log_rotation_handler(&self, config, logs_path,) -> Option<()> { ... }
fn run_log_rotation_loop(path, retention, ... , check_interval, should_stop, rx,) { ... }
fn read_log_files(logs_path) -> Vec<(fs::DirEntry, SystemTime, Duration, u64)> { ... }
fn cleanup_log_files(logs_path, retention, max_total_size, max_file_size,) { ... }
impl Drop for Logging { fn drop(&mut self) { ... } }

Logically, the implementation is straightforward: divide max_total_size by max_file_size to get the upper limit on file count, then read the rotation_check_interval and retention cycles configured by the user. After that, push file entries into a Vec<(fs::DirEntry, SystemTime, Duration, u64)>, compute which files are due for deletion on each scheduled check, and finally remove them in a batch.
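As a rough sketch of that arithmetic (the function name mirrors the signature listed above, but the body is my illustrative reconstruction, not the merged code):

```rust
// Illustrative reconstruction: upper bound on retained log files.
// A max_file_size of 0 means "one unbounded file", i.e. rotation disabled.
fn calculate_max_files(max_total_size: u64, max_file_size: u64) -> usize {
    if max_file_size == 0 {
        return usize::MAX; // unlimited: everything stays in a single file
    }
    // At least one file must always be allowed to exist.
    (max_total_size / max_file_size).max(1) as usize
}

fn main() {
    // With a config like the one shown later (4 GB total, 500 MB per file)
    // this allows eight files before the oldest is deleted.
    println!("{}", calculate_max_files(4_000_000_000, 500_000_000)); // prints 8
}
```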

core/integration/tests/server/scenarios/log_rotation_scenario.rs:Mainly Functions
fn is_valid_iggy_log_file(file_name,) -> bool { ... }
async fn run(client_factory, log_dir, present_log_config,) { ... }
async fn init_valid_client(client_factory,) -> Result<IggyClient, String> { ... }
async fn generate_enough_logs(client,) -> Result<(), String> { ... }
async fn validate_log_rotation_rules(log_dir, pre_config,) -> Result<(), String> { ... }
async fn nocapture_observer(log_path, title, done,) -> () { ... }
873b63965998e07ed7b50994286d1397b6359dd5: server.toml -> config.toml 96%
# Maximum size of a single log file before rotation occurs. When a log
# file reaches this size, it will be rotated (closed and a new file
# created). This setting works together with max_total_size to control
# log storage. You can set it to 0 to enable unlimited size of single
# log, but all logs will be written to a single file, thus disabling
# log rotation. Please configure 0 with caution, esp. RUST_LOG > debug
max_file_size = "500 MB"
# Maximum size of the log files before rotation.
max_size = "512 MB"
# Maximum total size of all log files. When this size is reached,
# the oldest log files will be deleted first. Set it to 0 to allow
# an unlimited number of archived logs. This does not disable time
# based log rotation or per-log-file size limits.
max_total_size = "4 GB"
# Time interval for checking log rotation status. Avoid less than 1s.
rotation_check_interval = "1 h"
# Time to retain log files before deletion. Avoid less than 1s, too.
retention = "7 days"

The above are configuration-level changes, and you can also see what has been implemented from them. For more specific details, you can use the commit hash to check the git log for the other additions and deletions. Since the issues people encounter are diverse, and log rotation is not highly technical, going into detail here would sacrifice general applicability. So instead of elaborating on the implementation, I'll pick up the more interesting parts later.

Git Console
git show 873b63965998e07ed7b50994286d1397b6359dd5 --stat
Git Log Search Output in Stat
╭─ $ [svecco] ~/A/iggy git!(master)
╰─ > git show 873b63965998e07ed7b50994286d1397b6359dd5 --stat
commit 873b63965998e07ed7b50994286d1397b6359dd5
Author: Svecco <chenrui@sve.moe>
Date: Mon Feb 2 17:43:48 2026 +0800
feat(server): implement log rotation based on size and retention (#2452)
- implemented log rotation based on size and retention as the title;
- implemented configurable attributes and imported breaking changes;
- added units and integration test in logger.rs and integration mod;
- added documentations and imported new dependencies, etc.
Cargo.lock | 21 +
DEPENDENCIES.md | 2 +
core/common/src/utils/byte_size.rs | 7 +
core/common/src/utils/duration.rs | 29 ++
core/integration/src/test_server.rs | 4 +-
core/integration/tests/S/S/log_rotation_scenario.rs | 382 +++++++++++++++++
core/integration/tests/server/scenarios/mod.rs | 1 +
core/integration/tests/server/specific.rs | 5 +-
core/server/Cargo.toml | 2 +
core/server/config.toml | 22 +-
core/server/src/configs/defaults.rs | 9 +-
core/server/src/configs/displays.rs | 8 +-
core/server/src/configs/system.rs | 7 +-
core/server/src/configs/validators.rs | 47 ++-
core/server/src/log/logger.rs | 475 +++++++++++++++++++++-
foreign/cpp/tests/e2e/server.toml | 26 +-
16 files changed, 1009 insertions(+), 38 deletions(-)

Panicked OOM Code 12 with 44GiB RAM Empty?!#

Problem Reproduction
git reset --hard ff6695ba589656acef68da534b4af55dda452c80 && cargo build --package server
CPU_ALLOCATION="<more than your PHYSICAL cores>" RUST_BACKTRACE=1 cargo run --bin server
Couldn’t Load Data From Disk?

That’s because the process attempted to read an incompatible system metadata file under the ./local_data/ directory. Just remove the ./local_data/ directory and try again.

Regarding the Num of Cores

On my device, testing shows that cpu_allocation = "19" is the limit, on a single CPU with 32 logical cores and distributed mode off. Triggering this might not require many cores, but it does require SMT.11

Generally speaking, this situation occurs because the requested memory allocation is too large: the allocation fails with ENOMEM, and the unwrap() on that result kills the shard (tears included). But when you check the usual suspects, you only sense that something is off without knowing exactly what:

ulimit -m && ulimit -v && cat /proc/sys/vm/overcommit_memory
unlimited, unlimited, 0

That was very strange to me. There are no external restrictions; on the contrary, resources are sufficient, yet it still fails to start. So is this an internal server issue?

Flood Full of These Panics
2026-02-13T10:08:28.268923Z ERROR main iggy_server: Server shutting down due to shard failure. (shutdown took 31 ms)
Error: ShardFailure { message: "Shard 22 panicked: called `Result::unwrap()` on an `Err` value: Os { code: 12, kind: OutOfMemory, message: \"Cannot allocate memory\" }" }
When Editing the Essay: free -h | head -n 2
total used free shared buff/cache available
Mem: 46 Gi 20 Gi 772 Mi 199 Mi 24 Gi 26 Gi

Since the error occurs in a shard, let's look into the code that implements the shard functionality and check how the shards are allocated.

core/server/src/configs/sharding.rs::CpuAllocation
impl CpuAllocation {
    pub fn to_shard_set(&self) -> HashSet<usize> {
        match self {
            CpuAllocation::All => {
                let available_cpus = available_parallelism()
                    .expect("Failed to get num of cores")
                    .get();
                (0..available_cpus).collect()
            }
            CpuAllocation::Count(count) => (0..*count).collect(),
            CpuAllocation::Range(start, end) => (*start..*end).collect(),
        }
    }
}

This core-allocation code seems harmless enough at first glance, yet there appears to be something off about it. The cores are allocated via available_parallelism(), so let's see what we can learn from its documentation.

Limitations12 / pub fn available_parallelism() -> Result<NonZero<usize>>#

The purpose of this API is to provide an easy and portable way to query the default amount of parallelism the program should use. Among other things it does not expose information on NUMA regions, does not account for differences in (co)processor capabilities or current system load, and will not modify the program's global state in order to more accurately query the amount of available parallelism.

“NUMA regions”? What is “NUMA”?

In early computer systems, all CPUs accessed memory through a single bus, an architecture known as SMP13. All CPUs were equal, with no master-slave relationship. As the number of processors increased, the system bus became a critical bottleneck, leading to significant latency in communication between processors and memory.

From the Hardware Architecture

In the NUMA architecture, CPUs are divided into multiple NUMA nodes. Each node has its own independent memory space and PCIe bus subsystem. Communication between nodes is achieved via the QPI bus.14

hwloc.webp

The speed at which a CPU accesses memory varies by node: access to the local node is fastest, while access to remote nodes is slower. In other words, memory access speed depends on the distance to the node: the greater the distance, the slower the access. This is why it is called NUMA (Non-Uniform Memory Access). The memory access distance is referred to as the Node Distance.

This architecture effectively solves the performance issues caused by large-scale CPU expansion under the SMP model.15

Back to Iggy: this piece of code directly maps abstract CPU allocation rules to a set of logical CPU core IDs, where cpu_allocation = "all" generates a set whose size equals the number of logical cores reported by the system. In this scenario, 32 shards are created instead of 16, which triggers two cascading issues. First, multiple shards are bound to the SMT threads of the same physical core, resulting in resource contention. Second, the 32 shards, combined with the pre-allocated memory pool, generate a massive amount of instantaneous memory requests, exceeding the kernel's heuristic overcommit threshold (vm.overcommit_memory=0), thereby triggering Code 12 and crashing the shards.16
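To make the failure mode concrete, here is a toy model, entirely my own illustration rather than Iggy's code: on a 16-core/32-thread part, trusting the logical-core count produces twice as many shards as a physical-core-aware policy would.

```rust
// Toy model of the two policies. The 16C/32T split matches the Ryzen 9
// 9950X mentioned earlier; real topology detection needs hwloc, not this.
fn shards_from_logical(logical_cpus: usize) -> usize {
    logical_cpus // what an available_parallelism()-style mapping yields
}

fn shards_from_physical(logical_cpus: usize, smt_enabled: bool) -> usize {
    // One shard per physical core avoids SMT siblings fighting each other.
    if smt_enabled { logical_cpus / 2 } else { logical_cpus }
}

fn main() {
    let (logical, smt) = (32, true);
    println!("naive:  {}", shards_from_logical(logical));       // 32 shards
    println!("aware:  {}", shards_from_physical(logical, smt)); // 16 shards
}
```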

Now, we can examine the source code to see why the documentation states this.

rustlib::std/src/sys/thread/unix.rs::available_parallelism
pub fn available_parallelism() -> io::Result<NonZero<usize>> {
    cfg_select! {
        any(target_os = "linux") => {
            #[cfg(any(target_os = "android", target_os = "linux"))]
            {
                quota = cgroups::quota().max(1);
                let mut set: libc::cpu_set_t = unsafe { mem::zeroed() };
                unsafe {
                    if libc::sched_getaffinity(0, size_of::<libc::cpu_set_t>(), &mut set) == 0 {
                        let count = libc::CPU_COUNT(&set) as usize;
                        let count = count.min(quota);
                        if let Some(count) = NonZero::new(count) {
                            return Ok(count)
                        }
                    }
                }
            }
            match unsafe { libc::sysconf(libc::_SC_NPROCESSORS_ONLN) } {
                -1 => Err(io::Error::last_os_error()),
                0 => Err(io::Error::UNKNOWN_THREAD_COUNT),
                cpus => {
                    let count = cpus as usize;
                    let count = count.min(quota);
                    Ok(unsafe { NonZero::new_unchecked(count) })
                }
            }
        }
    }
}

Over? Nope, not quite. I found this option in the configuration:

ff6695ba589656acef68da534b4af55dda452c80::core/configs/server.toml::line::500-505
# Size of the memory pool (string).
# Example: "512 MiB" or "1 GiB".
# This defines the maximum, total memory allocated for the memory pool.
# Note: This number has to be multiplication of 4096 (default linux page size).
# Minimum size is 512 MiB due to internal implementation details.
size = "4 GiB"

Reducing the memory pool size (e.g., from 4 GiB to 1 GiB) lowers the initial pre-allocation peak and decreases the total instantaneous memory demand. This value may fall within the kernel’s “repayable” memory range, allowing the server to start occasionally.

However, this only alleviates the symptoms and does not address the root cause. Physical cores and SMT threads are not distinguished, nor is there a limit on the maximum number of shards. Ultimately, this amplifies memory pressure to the point of triggering an OOM error on SMT-enabled multi-core CPUs.

When the First Time I Encountered This

Of course, I didn't know any of this at the beginning of the pull request, nor did I realize it was a NUMA issue. I was fixated entirely on the memory side and was just about to file an issue. Then, after fetching upstream, the problem was suddenly resolved. After some locating, let's take a look at the commit that fixed it.

addressing #2387, feat(server): NUMA awareness (#2412), committed by @tungtose
git show a5d569450ee34441be997786046c7a30785e11f2 --stat

numa_flow.webp

Links to the Original Issue and Pull Request

The issue was handled by @tungtose , and the image above was the flow he drew. You can find the original issue #2387 here, filed by @hubcio (Hubert Gruszecki) , and the original pull request here #2412. Merged by @spetz (Piotr Gankiewicz) on Dec 15, 2025.

Cargo.toml
# line 135
hwlocality = "1.0.0-alpha.11"
core/server/src/configs/sharding.rs
// line 50
#[derive(Debug, Clone, PartialEq, Default)]
pub struct NumaConfig {
    pub nodes: Vec<usize>,
    pub cores_per_node: usize,
    pub avoid_hyperthread: bool,
}
// line 178
#[derive(Debug)]
pub struct NumaTopology {
    topology: Topology,
    node_count: usize,
    physical_cores_per_node: Vec<usize>,
    logical_cores_per_node: Vec<usize>,
}
core/server/src/configs/sharding.rs
impl ShardInfo {
    pub fn bind_memory(&self) -> Result<(), ServerError> {
        if let Some(node_id) = self.numa_node {
            let topology = Topology::new().map_err(|err| ServerError::TopologyDetection {
                msg: err.to_string(),
            })?;
            let node = topology
                .objects_with_type(ObjectType::NUMANode)
                .nth(node_id)
                .ok_or(ServerError::InvalidNode {
                    requested: node_id,
                    available: topology.objects_with_type(ObjectType::NUMANode).count(),
                })?;
            if let Some(nodeset) = node.nodeset() {
                topology
                    .bind_memory(
                        nodeset,
                        MemoryBindingPolicy::Bind,
                        MemoryBindingFlags::THREAD | MemoryBindingFlags::STRICT,
                    )
                    .map_err(|err| {
                        tracing::error!("Failed to bind memory {:?}", err);
                        ServerError::BindingFailed
                    })?;
                info!("Memory bound to NUMA node {node_id}");
            }
        }
        Ok(())
    }
}

Although I don't understand much of it17, it's much better than the indiscriminate thread allocation based on the simple CPU set from available_parallelism(). Even more delightful: as a newcomer, I can finally write code happily.

Tungtose
@tungtose. No description provided.

Wait, Some Compiling Dependencies Missing#

It turns out that hwloc-devel was missing, from which we can perhaps infer that the bug has truly been fixed XD.

But something still doesn’t seem right?

thread 'main' (948800) panicked at /home/svecco/.cargo/registry/src/mirrors.tuna.tsinghua.edu.cn-4dc01642fd091eda/hwlocality-sys-0.6.4/build.rs:82:10:
Could not find a suitable version of hwloc:
pkg-config exited with status code 1
> PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 pkg-config --static --libs --cflags hwloc 'hwloc >= 2.0.0' 'hwloc < 3.0.0'
The system library `hwloc` required by crate `hwlocality-sys` was not found.
The file `hwloc.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory.
The PKG_CONFIG_PATH environment variable is not set.
HINT: if you have installed the library, try setting PKG_CONFIG_PATH to the directory containing `hwloc.pc`.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Of course, we can instantly solve this by installing the hwloc-devel package via dnf:

Updating and loading repositories:
Repositories loaded.
Package Arch Version Repository Size
Installing:
hwloc-devel x86_64 2.12.0-2.fc43 fedora 665.3 KiB
Installing dependencies:
infiniband-diags x86_64 58.0-4.fc43 fedora 961.8 KiB
libibumad x86_64 58.0-4.fc43 fedora 43.9 KiB
rdma-core-devel x86_64 58.0-4.fc43 fedora 613.8 KiB
Transaction Summary:
Installing: 4 packages
Total size of inbound packages is 1 MiB. Need to download 1 MiB.
After this operation, 2 MiB extra will be used (install 2 MiB, remove 0 B).
Is this ok [y/N]: y
[1/4] infiniband-diags-0:58.0-4.fc43.x86 100% | 611.2 KiB/s | 328.2 KiB | 00m01s
[2/4] hwloc-devel-0:2.12.0-2.fc43.x86_64 100% | 664.9 KiB/s | 361.0 KiB | 00m01s
[3/4] rdma-core-devel-0:58.0-4.fc43.x86_ 100% | 712.8 KiB/s | 429.8 KiB | 00m01s
[4/4] libibumad-0:58.0-4.fc43.x86_64 100% | 372.5 KiB/s | 27.2 KiB | 00m00s
---------------------------------------------------------------------------------
[4/4] Total 100% | 1.8 MiB/s | 1.1 MiB | 00m01s
Running transaction
[1/6] Verify package files 100% | 1.3 KiB/s | 4.0 B | 00m00s
[2/6] Prepare transaction 100% | 19.0 B/s | 4.0 B | 00m00s
[3/6] Installing libibumad-0:58.0-4.fc43 100% | 2.3 MiB/s | 44.7 KiB | 00m00s
[4/6] Installing infiniband-diags-0:58.0 100% | 56.2 MiB/s | 978.7 KiB | 00m00s
[5/6] Installing rdma-core-devel-0:58.0- 100% | 41.7 MiB/s | 682.7 KiB | 00m00s
[6/6] Installing hwloc-devel-0:2.12.0-2. 100% | 2.7 MiB/s | 747.8 KiB | 00m00s
Complete!

But why does building against hwloc still fail when using Nix? Let's try a few things.

nixos-version: 25.11.20251215.c6f52eb (Xantusia)
hwloc-info --version: hwloc-info 2.12.2rc1-git
ls -la /nix/store/{LONG_HASH}-hwloc-2.12.2-dev/lib/pkgconfig/ | grep 'hwloc.pc'
.r--r--r-- 439 root 1 Jan 1970 hwloc.pc

Hmm… that’s really puzzling. But there’s a somewhat hacky way to fix it.

PKG_CONFIG_PATH=/nix/store/{LONG_HASH}-hwloc-2.12.2-dev/lib/pkgconfig/ cargo build 2>&1
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.33s

Let's look at the source. Ah, hwloc.pc sits in the dev output of the nixpkgs package, which has multiple outputs… So I can't rely on it being installed correctly by just adding the package name directly.

nixpkgs/pkgs/by-name/hw/hwloc/package.nix:74-80
outputs = [ "out" "lib" "dev" "doc" "man" ];
# "out" was the default output. Below was my installation:
packages = with pkgs; [ hwloc ];
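One hedged workaround (I'm assuming standard nixpkgs multiple-output conventions here; verify against your own channel) is to reference the dev output explicitly so hwloc.pc ends up discoverable:

```nix
# Hypothetical devShell fragment: pull in hwloc's "dev" output (which
# carries lib/pkgconfig/hwloc.pc) plus pkg-config itself.
packages = with pkgs; [ hwloc.dev pkg-config ];
```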

IEC != SI#

The picture below shows the textbook for Information Technology in Chinese senior high.

wrong_textbook.webp

But IT IS WRONG18. The correct convention follows IEC 60027-219 and looks like this20.

| Units Computers Use | IEC | Value | Units Hardware Vendors Use | SI | Value |
| --- | --- | --- | --- | --- | --- |
| Kibibyte | KiB | 1024¹ | Kilobyte | kB | 1000¹ |
| Mebibyte | MiB | 1024² | Megabyte | MB | 1000² |
| Gibibyte | GiB | 1024³ | Gigabyte | GB | 1000³ |
| Tebibyte | TiB | 1024⁴ | Terabyte | TB | 1000⁴ |
| Pebibyte | PiB | 1024⁵ | Petabyte | PB | 1000⁵ |
| Exbibyte | EiB | 1024⁶ | Exabyte | EB | 1000⁶ |
| Zebibyte | ZiB | 1024⁷ | Zettabyte | ZB | 1000⁷ |
| Yobibyte | YiB | 1024⁸ | Yottabyte | YB | 1000⁸ |

I didn't know any of this at first, until I started with Apache Iggy and @hubcio pointed it out to me.

Fake code, just for exhibiting.
assert_eq!( return_bytes(1 GiB) , 1000000000 );
/* Panicked. Left: 1073741824 , Right: 1000000000 */
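A runnable version of that contrast, using plain arithmetic with no crate assumed:

```rust
// IEC binary units vs SI decimal units (naming per IEC 60027-2).
const GIB: u64 = 1 << 30;      // gibibyte: 1024^3 bytes
const GB: u64 = 1_000_000_000; // gigabyte: 1000^3 bytes

fn main() {
    // "1 GiB" overshoots "1 GB" by roughly 7.4%, which is exactly the
    // mismatch the panicking assertion above demonstrates.
    println!("1 GiB = {GIB} bytes, 1 GB = {GB} bytes");
    println!("difference: {} bytes", GIB - GB);
}
```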

Are You Sure#

Alright, honestly, I'm not. I'll take a closer look before --force-pushing next time. If I don't have enough time, I'll just test it later; also, learn to write your own test scenarios. Don't always ask others to test features: it's not good practice, and it usually wastes maintainers' time.

Now, let’s see what silly things this guy @Svecco did.

1/8 core/server/src/log/logger.rs: Please import rather than use the absolute path each time.
if let Err(e) = std::fs::create_dir_all(&logs_path) {
tracing::warn!("Failed to create logs directory {:?}: {}", logs_path, e);
2/8 core/server/src/log/logger.rs: All magic numbers, no matter how simple, should be consts.
// Check available disk space, at least 10MiB
let min_disk_space: u64 = 10 * 1024 * 1024;
3/8 core/server/src/log/logger.rs: What are you doing?
let max_files = Self::calculate_max_files(
config.max_size.as_bytes_u64(),
config.max_size.as_bytes_u64(),
4/8 core/server/src/log/logger.rs: Don't add a mutex when you're not even sure it's a threading issue.
let path = logs_path.to_path_buf();
let max_size = config.max_size.as_bytes_u64();
let retention = config.retention.get_duration();
let rotation_mutex = Arc::new(Mutex::new(()));
5/8 core/server/src/configs/system.rs: Can you just copy? That might even be better.
fn default_max_total_log_size() -> IggyByteSize { IggyByteSize::from(4_000_000_000) }
fn default_log_rotation_check_interval() -> u64 { 3600 }
6/8 core/server/src/log/logger.rs: 'How are you preventing active log file deletion?'
fn install_log_rotation_handler(&self, config: &LoggingConfig, logs_path: Option<&PathBuf>) { ... }
7/8 core/server/src/log/logger.rs: 'Use IggyDuration/IggyByteSize when you can :)'
fn cleanup_log_files(logs_path: &PathBuf, retention: Duration, max_size_bytes: u64) {...}
8/8 core/integration/tests/server/scenarios/log_rotation_scenario.rs: Is this how you 'panic'?
match rotation_result {
Ok(()) => println!("Succeeded Verified Log Rotation"),
Err(e) => { eprintln!("Failed: {e}"); }
}

I don’t know if staying up late frequently is one of the reasons for these outrageous problems.
But for now, it just seems like an excuse for not being good enough.

Btw, according to Programming Rust, when making multi-line macro calls, remember the trailing comma.21

Template Multi Lines Call
reg_set_bits!(
    GPIO_CTRL_REG,
    0x01, // Typically, this refers to multi-line
    0x04, // macro invocations with a variable
    0x10, // number of arguments.
)?;       // Go add the trailing comma.

Make a Clean Git Log#

Judging from the current git log of the sve.moe site’s refactoring:

git log --oneline: Before && After
5f5d99c arst | 4b5b2a9 fix(style): err rendered btn and refine horizonal lines
e46e4af qwfp | 211112c fix(name): officiallly, NixOS CN Meetup -> NixCN Conference
221db94 neio | 42d9a53 fix(render): remove duplicate blur to reduce graphics load (#12)
eb408c2 ?? | 83d643e fix(archives): which to keep, which ... carry on. (#11)
b81821e xcdv | 34ca32e feat(sve): some() => render.presure().alleviate(bit) (#10)
4db971b ababa | 65f1d68 feat(api): impl data cache for github api pulls
8387e24 OvO | ed70a3e feat(fuwari): many personalizations
1287507 luyj | 3df9f6f init(fuwari): by svecco on sve.moe

It’s true that NO ONE has required me to use commit messages like this for my own tiny git repos, but I just can’t help it. It must be because I’ve been domesticated by Apache.

A General Message Format Following Conventional Commits
commit d98176a36a565ec3a47a6a5e52869a2dd6cc36c4
Author: Piotr Gankiewicz <piotr.gankiewicz@gmail.com>
Date: Thu Feb 5 15:12:00 2026 +0100
"
fix(server): memory leak in segment rotation (#2686)
Segment rotation accumulated memory indefinitely because sealed segments
retained their 16MB index buffers and kept file writers open. Under
heavy write load with frequent rotations, this caused memory to balloon
from ~100MB to 20GB+.
The fix clears index buffers for sealed segments (when cache_indexes !=
All) and closes their writers immediately after sealing. Writers are
never needed post-seal, and this also releases io_uring/kernel file
handle resources.
"

Mysterious Bug#

fish $ cargo build --all-targets --release --workspace --all-features
warning: Invalid record ( Producer: 'LLVM21.1.3-rust-1.92.0-stable'
Reader : 'LLVM 21.1.3-rust-1.92.0-stable')
error: failed to load bitcode of module "server-f3f5ee30feef62f6.server.74b6934297a2875c-cgu.0.rcgu.o":
warning: `bdd` (test "basic_messaging") generated 1 warning
error: could not compile `bdd` (test "basic_messaging") due to 1 previous error; 1 warning emitted
warning: build failed, waiting for other jobs to finish...

Is the only difference that space?


CI/CD Workflow#

GitHub Actions#

According to the public billing rates on GitHub, I initially thought ASF’s CI minutes were being paid for22, because the project is large, compilations are frequent, and the community is very active.

About Clippy

A toolchain is uniquely identified by its version && commit hash. Different hashes are regarded as different toolchains.

However, initially, due to system environment issues, the Clippy check could not be run. Since the issue can only be reproduced on NixOS… … and cannot be reproduced on Fedora or Gentoo right now, let’s dig into the relevant source code instead.

src/cargo/core/compiler/build_context/target_info.rs::compare_rustc_versions
pub fn compare_rustc_versions(
    &self,
    current: &RustcInfo,
    artifact: &RustcInfo,
) -> VersionCompatibility {
    if current.commit_hash == artifact.commit_hash && !current.commit_hash.is_empty() {
        return VersionCompatibility::ExactMatch;
    }
    if current.version == artifact.version {
        return VersionCompatibility::VersionMatch;
    }
    let current_parts: Vec<&str> = current.version.split('.').collect();
    let artifact_parts: Vec<&str> = artifact.version.split('.').collect();
    if current_parts.len() >= 2 && artifact_parts.len() >= 2 {
        if current_parts[0] == artifact_parts[0] && current_parts[1] == artifact_parts[1] {
            return VersionCompatibility::MinorVersionMatch;
        }
    }
    VersionCompatibility::Different
}
The Clippy Output Beforehand Looked Something Like This
Detected a version mismatch:
The Rust compiler (rustc) version 1.92.0 (3df9f6f) used during
the build process does not match the rustc 1.92.0 (ed70a3e) used
to build the compiled artifact.
Aborted with 1 error. Clippy check failed.

Obviously, I didn’t really want to mess with it; just force it first and see what happens. So this happened often:

ci_failed.webp

Then I felt that something wasn’t right, so I checked the GitHub Actions manual, and got this:

Operating system                      Billing SKU          Per-minute rate (USD)
Linux 1-core (x64)                    actions_linux_slim   $0.002
Linux 2-core (x64)                    actions_linux        $0.006
Linux 2-core (arm64)                  actions_linux_arm    $0.005
Windows 2-core (x64)                  actions_windows      $0.010
Windows 2-core (arm64)                actions_windows_arm  $0.010
macOS 3-core or 4-core (M1 or Intel)  actions_macos        $0.062
.github/workflows/*
Permissions Size User Date Modified Name
.rw-r--r--@ 9.4k svecco 13 Feb 21:47 _build_python_wheels.yml
.rw-r--r--@ 11k svecco 13 Feb 21:47 _build_rust_artifacts.yml
.rw-r--r--@ 13k svecco 13 Feb 21:47 _common.yml
.rw-r--r--@ 17k svecco 13 Feb 21:47 _detect.yml
.rw-r--r--@ 7.8k svecco 13 Feb 21:47 _publish_rust_crates.yml
.rw-r--r--@ 4.6k svecco 13 Feb 21:47 _test.yml
.rw-r--r--@ 4.2k svecco 13 Feb 21:47 _test_bdd.yml
.rw-r--r--@ 6.6k svecco 13 Feb 21:47 _test_examples.yml
.rw-r--r--@ 10k svecco 13 Feb 21:47 post-merge.yml
.rw-r--r--@ 17k svecco 13 Feb 21:47 pre-merge.yml
.rw-r--r--@ 52k svecco 13 Feb 21:47 publish.yml
.rw-r--r--@ 2.0k svecco 13 Feb 21:47 stale-prs.yml

Oh my, the cost would be quite high when the workload is heavy. Although only two runs were triggered by me, there were many others running, too. Anyway, it seemed better to mention it to @spetz. I had a fever for a few days at the beginning of 2026 and wasn’t able to work; hopefully nothing serious happens, or I’ll just have to watch helplessly.

scrst.webp

Fortunately, nothing did. Well, although it’s true23 that Microsoft is a sponsor of The ASF, 24 what matters more is this line from @spetz after I asked:

Thank you for all the changes, and dont worry about CI.

platinum.webp

Why Not Container?#

My local computing power is just sitting idle anyway, so I may as well put it to use.

Local CI/CD

Since a powerful computer cluster can be shared, running CI/CD on the cloud is generally the recommended choice. As for doing this locally, it seems rather controversial, as it competes for computing power with subsequent development work.

nektos/act

Act is an open source project that provides a way to run CI/CD locally, written in Golang.

Container Option

Act uses Docker to run the CI/CD workflow, which is the default, offering 3 image sizes: micro, medium, and large. In most cases, medium is sufficient. The large version can fully replicate the GitHub CI environment, comes pre-installed with all the toolchains included in the GitHub Actions runner, and can take up nearly 70 GB after decompression.

doas docker images catthehacker/ubuntu:full-latest
IMAGE ID DISK USAGE CONTENT SIZE EXTRA
catthehacker/ubuntu:full-latest 9327301aea27 67.2 GB 16.8 GB
GitHub Runner   Micro Docker Image     Medium Docker Image             Large Docker Image
ubuntu-latest   node:16-buster-slim    catthehacker/ubuntu:act-latest  catthehacker/ubuntu:full-latest
ubuntu-22.04    node:16-bullseye-slim  catthehacker/ubuntu:act-22.04   catthehacker/ubuntu:full-22.04
ubuntu-20.04    node:16-buster-slim    catthehacker/ubuntu:act-20.04   catthehacker/ubuntu:full-20.04
ubuntu-18.04    node:16-buster-slim    catthehacker/ubuntu:act-18.04   catthehacker/ubuntu:full-18.04
Always Failing?

Even if everything is installed, it may still fail to run, because GitHub Actions runs on fully virtualized machines, while act is based on Docker containers. The former runs with the OOM Killer disabled and without seccomp/AppArmor security restrictions, which makes it more forgiving.25

Install Act with Wget
# ***wget***
wget -qO- https://github.com/nektos/act/releases/download/v0.2.84/act_Linux_x86_64.tar.gz | tar xvz
mv ./act /usr/local/bin/act && act --version # No need to use the full path to run
Install Act with Go
# Golang required
git clone https://github.com/nektos/act.git && cd act && go install

However, pulling the full image directly doesn’t seem to work either. GitHub maintains its own official images, while this full version does not include the Rust toolchain, which causes failures along the lines of being unable to find the Rust toolchain.

After Some Probing, I Settled on These Flags
# For convenience, Docker will be used directly for demonstration below.
doas docker pull catthehacker/ubuntu:rust-latest # Fetch this
doas act -P ubuntu-latest=catthehacker/ubuntu:rust-latest \
--platform linux/amd64 \
--container-architecture linux/amd64 \
--pull=false \
--defaultbranch=master \
--container-cap-add SYS_ADMIN \
--container-cap-add IPC_LOCK \
--container-options "--security-opt seccomp=unconfined --security-opt apparmor=unconfined -v /dev/shm:/dev/shm --privileged" \
--verbose

Yay, it compiles! But after a while:

.github/workflows: cargo workflow testing, 17 failed
| Summary [ 111.807s] 1250 tests run: 1233 passed, 17 failed, 96 skipped
| FAIL [ 0.364s] ( 784/1250) integration::mod cli::personal_access_token::
| test_pat_login_options::should_be_successful
| FAIL [ 0.199s] ( 814/1250) integration::mod cli::system::
| test_cli_session_scenario::should_be_successful
| FAIL [ 26.639s] (1129/1250) integration::mod connectors::http_config_provider::
| direct_responses::sink_active_config_returns_
| current_version
| FAIL [ 26.551s] (1130/1250) integration::mod connectors::http_config_provider::
| direct_responses::sink_configs_list_returns_all_
| versions
| FAIL [ 27.207s] (1140/1250) integration::mod connectors::http_config_provider::
| direct_responses::source_active_config_returns_
| current_version
| FAIL [ 27.525s] (1141/1250) integration::mod connectors::http_config_provider::
| direct_responses::sink_config_by_version_returns_
| specific_version
| FAIL [ 28.282s] (1154/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::sink_active_config_returns_
| current_version
| FAIL [ 28.362s] (1156/1250) integration::mod connectors::http_config_provider::
| direct_responses::source_config_by_version_returns_
| specific_version
| FAIL [ 7.141s] (1159/1250) integration::mod server::message_cleanup::
| message_cleanup::expiry_multipartition_expects
| FAIL [ 28.521s] (1163/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::sink_configs_list_returns_all_
| versions
| FAIL [ 28.885s] (1168/1250) integration::mod connectors::http_config_provider::
| direct_responses::source_configs_list_returns_all_
| versions
| FAIL [ 28.930s] (1170/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::sink_config_by_version_returns_
| specific_version
| FAIL [ 28.913s] (1188/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::source_config_by_version_returns
| _specific_version
| FAIL [ 29.320s] (1236/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::source_active_config_returns_
| current_version
| FAIL [ 29.434s] (1237/1250) integration::mod connectors::http_config_provider::
| wrapped_responses::source_configs_list_returns_all_
| versions
| FAIL [ 106.101s] (1249/1250) integration::mod connectors::quickwit::quickwit_
| sink::given_bulk_message_send_should_store
| FAIL [ 109.062s] (1250/1250) integration::mod connectors::quickwit::quickwit_
| sink::given_existent_quickwit_index_should_store
| error: test run failed
.github/workflows/_build_rust_artifacts.yml
- target: x86_64-unknown-linux-gnu # Same arch as above, linux/amd64
  runner: ubuntu-latest

Judging from the logs, everything points to one thing:

Some of the failed tracebacks
failed to start test harness: Health check failed for iggy-connectors at http://127.0.0.1:35235 after 1000 retries
failed to start test harness: Health check failed for iggy-connectors at http://127.0.0.1:40443 after 1000 retries
failed to start test harness: Health check failed for iggy-connectors at http://127.0.0.1:39753 after 1000 retries
Received an invalid HTTP response when ingesting messages for index: test_topic. Status code: 404 Not Found, reason: { "message": "index test_topic not found" }
search: InvalidState{message:"Expected 13 documents but got 0 after 100 poll attempts"}
doas docker network ls
doas (svecco@orion) password:
NETWORK ID NAME DRIVER SCOPE
8462db97170e bridge bridge local
31ea411fde48 host host local
5cefc05182e2 iggy-quickwit-sink-4bd7b25b-36fb-4b04-97f1-ce11755d4322 bridge local
1f577e667a52 none null local
Cleaning Up the Remaining Resources
doas docker ps -a | grep iggy-quickwit | awk '{print $1}' | xargs -r doas docker rm -f
doas docker network rm iggy-quickwit-sink-4bd7b25b-36fb-4b04-97f1-ce11755d4322
docker container prune -f && docker network prune -f # What I actually did later.

Uh? DinD? We may need the --net=host option.

About Docker-in-Docker

Enables Docker-in-Docker container nesting capability, delivering a clean, isolated solution for containerized CI/CD pipelines, automated image building, environment replication and other container-native workflows, while avoiding dependency conflicts with the host environment.26

About quickwit-sink

quickwit-sink is a core delivery component of the cloud-native containerized logging stack, which streamlines unified log forwarding, indexing and persistence to Quickwit, with out-of-the-box compatibility and lighter configuration overhead for containerized environments.27

You May Have to Edit the Configs Under .github/ to Disable Some Tests
For Features Like CodeCoverage, You May Need a Token for Authentication

When act runs third-party Actions from the GitHub Marketplace (e.g., codecov-action), codecov-action requires a CODECOV_TOKEN (generated from the Codecov platform) to upload coverage reports. A GitHub Personal Access Token (PAT) is optionally needed for GitHub API authentication to reduce the risk of rate limiting; it is not mandatory for basic functionality. You can generate a GitHub PAT here: GitHub Personal Access Tokens. You can pass secrets to act by using --secret-file <path/to/token/file>, e.g., adding CODECOV_TOKEN=xxx or GITHUB_TOKEN=xxx.

GoodBoy ⟺ OneMinBoy♂
My Honey.
────────────
Summary [ 145.190s] 1454 tests run: 1454 passed (7 slow), 96 skipped
Notice: Tests executed in 161s (02:41)
=========================================
All targets build: 17s (00:17)
Tests compile: 30s (00:30)
Tests execute: 161s (02:41)
-----------------------------------------
Total build: 47s (00:47)
Total time: 208s (03:28)
=========================================

Others#

I’ve been shown!#

exhi.webp

28Original Text#

I’m thrilled to see this PR merged ^_^

This is my first ever Rust PR, and also my first contribution to be merged into a formal project that’s nearing 4k stars, rather than a package repo or other casual repositories. As a student, I initially feared the long dev and review cycle would be a hassle, but I’m extremely grateful to @hubcio and @spetz for your unparalleled patience in guiding and reviewing my work, who have helped me get up to speed with the workflow of a production ready backend project. I’ve also learned many things else e.g.podman, and many exp that I’d never get in school.

Thanks, again :)

release.webp

(Blog of Iggy Release 0.7.0, February 24, 2026)

Footnotes#

  1. The Apache Software Foundation, Apache Software Foundation Celebrates 25 Years, Mar. 25, 2024. [Accessed: Feb. 13, 2026]. [Online].

  2. The Apache Software Foundation, Valuating Code at The Apache Software Foundation, DinoSource, 2022. [Accessed: Feb. 13, 2026]. [Online].

  3. America’s Cyber Defense Agency, Apache Log4j Vulnerability Guidance, ED 22-02, Apr. 08, 2022. [Accessed: Feb. 13, 2026]. [Online].

  4. The Apache Software Foundation, Apache Log4j, Dec. 9, 2021. [Online]. (Log4Shell, CVE-2021-44228: Remote Code Execution (RCE) vulnerability, CVSSv3 10.0/10.0 severity)

  5. CrowdStrike, Remote Code Execution (RCE), [Online].

  6. J. Gardner, Why Java Sucks, Jul. 18, 2017. [Accessed: Feb. 14, 2026]. [Online].

  7. At my worst during this period, I even forgot the name of the PCIe interface. I didn’t know how to use Tmux either. I truly admire some middle school students who can code proficiently and contribute actively to various projects something I couldn’t do as a high school student. It was not until late 2023 to early 2025 that I gradually got back on track with Ubuntu.

  8. While I have created multiple commits locally, while the remote branch has updated some of these commits via a squash merge, I would create a new branch and then cherry-pick the desired commits one by one … Obviously, rebase --onto is fine.

  9. The Iggy Docs, Apache Iggy Documentation, n.d. [Accessed: Feb. 22, 2026]. [Online].

  10. Initially, I aimed to avoid division by zero errors by outright blocking any operation that set the value to zero. Later, I noticed that IggyByteSize actually defines a specific meaning for the zero value, so I revised the logic to treat it as “unlimited” to align with the behavior stated in comments throughout the rest of the codebase.

  11. A. Kotsiolis, Simultaneous Multithreading: Driving Performance and Efficiency on AMD EPYC CPUs, AMD Official Blog, Mar. 3, 2025. [Accessed: Feb. 14, 2026]. [Online].

  12. Rust Team, The Rust Standard Library (std), Commit 01f6ddf75, Feb. 11, 2026. [Accessed: Feb. 14, 2026]. [Online].

  13. J. L. Hennessy & D. A. Patterson, Computer Architecture: A Quantitative Approach (6th Edition), Elsevier, 2017. [Accessed: Feb. 14, 2026]. [Online].

  14. Wikipedia Contributors, Non-uniform memory access, Wikipedia, The Free Encyclopedia, 2026. [Accessed: Feb. 14, 2026]. [Online].

  15. @linux, What is NUMA, and why do we need to understand NUMA?, Zhihu, Linux Server Development Column, Jul. 14, 2023. [Accessed: Feb. 14, 2026]. [Online].

  16. Crate hwloc: Rust Bindings for the Hwloc library. The hwloc library is a rust binding to the hwloc C library, which provides a portable abstraction of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading.

  17. hwlocality Developers, hwlocality: Rust binding for hwloc, [Online]. (Designed for hardware topology detection, NUMA support, and thread binding optimization; secure, easy to use & actively maintained). [Accessed: Feb. 14, 2026].

  18. CGPM, SI, Conférence Générale des Poids et Mesures. [Online]. [Accessed: Feb. 14, 2026].

  19. I.E.C., IEC 60027, International Electrotechnical Commission. [Online]. [Accessed: Feb. 14, 2026].

  20. Wikimedia Foundation, Inc. & I.E.C., Wikipedia, Nov. 2023. [Accessed: Feb. 14, 2026].

  21. J. Blandy & J. Orendorff & L. F. S. Tindall, Programming Rust 2nd Edition: Fast, Safe Systems Development, Sebastopol, CA: O’Reilly Media Inc., 2021. [Accessed: Feb. 14, 2026]. [Online].

  22. GitHub Inc, GitHub Actions billing, Docs, Jan. 13, 2026. [Accessed: Feb. 14, 2026]. [Online].

  23. The Apache Software Foundation, Our Sponsors, Apache Software Foundation Website, 2025. [Accessed: Feb. 14, 2026]. [Online]. (Sponsors, community, projects, and brand information).

  24. The Apache Software Foundation, Our Sponsorship Program, Apache Software Foundation Website, 2025. [Accessed: Feb. 14, 2026]. [Online]. (Sponsors, community, projects, and brand information).

  25. Act, Runners, Act User Guide, 2026. [Accessed: Feb. 14, 2026]. [Online].

  26. Meta Inc., What is ‘DinD’? Llama 4 Scout, Feb. 14, 2025. [Accessed: Feb. 14, 2025].

  27. Meta Inc., What is quickwit-sink? Llama 4 Scout, Feb. 14, 2025. [Accessed: Feb. 14, 2025].

  28. @Svecco, Code 3858794990 Comments, GitHub, Apache Iggy Pull Request #2452, Feb. 6, 2026. [Accessed: Feb. 6, 2026]. [Online].

Embark: Apache Iggy
https://sve.moe/posts/2511/2452/
Author
Svecco
Published at
2026-02-20
License
CC BY-NC-SA 4.0