Chainyq provides fast, ergonomic queues, lists, iterators and lazy sequences for Go, with a focus on high and predictable performance. The API offers rich functionality inspired by collections in Rust and Java, while prioritizing memory control, static dispatch and deliberately avoiding errors and panics to reduce complexity and overhead. Written in Go 1.25 with zero dependencies.
- `Deque[T any]` - a highly optimized cache-friendly segmented array deque
  - Full memory control with `Resize`, `ResizeFront`, `ResizeBack`, `EnsureFront`, `EnsureBack`, `ShrinkToFit` and optional configurable pooling
  - Fast, O(1) end operations and random access
  - 3 types of iterators tailored to the implementation
  - Likely the fastest general-purpose deque you'll find
- `List[T any]` - a simple doubly-linked list
  - Fast, O(1) insertion and deletion anywhere, for cases when you need to do it frequently
  - ~1.5x faster than `container/list`
  - 3 types of iterators that can mutate the list
- `Stack[T any]` - a slice-based stack
  - Fast, O(1) `Push`, `Pop` and `Peek`
  - Top-down iterator
- `Seq[T any]` - a lightweight lazy sequence for functional transformations
  - `Filter`, `Map`, `FlatMap`, `Reduce`, `ForEach`, `ToSlice`, `ToMap`, `GroupBy`, etc.
  - Can be created from iterators, slices, and plain functions
  - Supports lazy infinite sequences
- `SyncDeque[T any]` - a synchronized wrapper around `Deque[T]`
`Seq[T]` is the only type with dynamically dispatched operations, because a function pointer is needed to enable laziness.
General interfaces are available in the root package (chainyq) with implementation-specific interfaces located in their respective packages.
All methods are designed not to tolerate nil receivers, which avoids redundant
nil checks - just call New(), or a more specialized constructor
if you need extra customization. Free functions that accept pointers
do tolerate nil.
```sh
go get github.com/zelr0x/chainyq
```

chainyq's `deque.Deque` is a double-ended queue, and it's fast.
The following benchmarks highlight common workloads for deques. This is an excerpt from:

```sh
go test -bench=. -benchmem -count=10 > bench.txt
benchstat bench.txt
```

| PushBack | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 3.329 ± 11% | 8 ± 0% | 0 |
| chainyq.Deque_EnsureBack | 2.505 ± 3% | 0 ± 0% | 0 |
| edwingeng.Deque | 4.886 ± 4% | 8 ± 0% | 0 |
| gammazero.Deque | 3.928 ± 33% | 14 ± 14% | 0 |
| gammazero.Deque_SetBaseCap | 3.981 ± 9% | 14 ± 7% | 0 |
| chainyq.List | 44.130 ± 7% | 24 ± 0% | 1 |
| container.List | 67.350 ± 13% | 55 ± 0% | 1 |
| PushFront | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 3.585 ± 5% | 8.0 ± 0% | 0 |
| chainyq.Deque_EnsureFront | 2.743 ± 1% | 0.0 ± 0% | 0 |
| edwingeng.Deque | 4.526 ± 3% | 8.0 ± 0% | 0 |
| gammazero.Deque | 5.072 ± 22% | 18.0 ± 22% | 0 |
| chainyq.List | 46.920 ± 9% | 24.0 ± 0% | 1 |
| container.List | 72.550 ± 9% | 55.0 ± 0% | 1 |
| BlockBoundaryThrash | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 5.124 ± 2% | 0 ± 0% | 0 |
This one is extremely unpleasant for segmented arrays. Notice that memory usage still stays at 0 B/op with 0 allocs/op.
| Random churn (int) | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 9.74 ± 1% | 0 ± 0% | 0 |
| edwingeng.Deque | 10.10 ± 1% | 0 ± 0% | 0 |
| gammazero.Deque | 10.81 ± 2% | 0 ± 0% | 0 |
| chainyq.List | 18.80 ± 0% | 11 ± 0% | 0 |
| container.List | 26.59 ± 3% | 27 ± 0% | 0 |
| Random churn (big struct) | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 14.82 ± 0% | 0 ± 0% | 0 |
| edwingeng.Deque | 15.49 ± 1% | 0 ± 0% | 0 |
| gammazero.Deque | 14.72 ± 1% | 0 ± 0% | 0 |
| Random access (get by index) | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 8.90 ± 2% | 0 ± 8% | 0 |
| edwingeng.Deque | 2761.00 ± 7% | 0 ± 8% | 0 |
| gammazero.Deque | 11.22 ± 42% | 0 ± 8% | 0 |
Ring buffer should win here, and it does sometimes. Not this time though
| Bursts of 100k writes/10k reads | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 19.12 ± 5% | 0 ± 0% | 0 |
| edwingeng.Deque | 20.05 ± 4% | 0 ± 0% | 0 |
| gammazero.Deque | 29.98 ± 5% | 2 ± 0% | 0 |
Segmented array should win here
| Sliding window | ns/op | B/op | allocs/op |
|---|---|---|---|
| chainyq.Deque | 18.23 ± 7% | 0 ± 0% | 0 |
| edwingeng.Deque | 21.62 ± 6% | 0 ± 0% | 0 |
| gammazero.Deque | 17.31 ± 6% | 0 ± 0% | 0 |
Ring buffer should win here, but the gain is minimal
Across all benchmarks, chainyq.Deque has delivered excellent performance:
- It matches the fastest ring buffer implementations even on workloads that traditionally favor them. The random access and sliding window benchmarks demonstrate this clearly: a segmented array can't beat a good ring buffer there, yet `chainyq.Deque` keeps up.
- In bursty growth scenarios, it is the clear winner.
- It remains unaffected by workloads tailored to hurt segmented arrays, such as block boundary thrashing.
- Overall, `chainyq.Deque` is not only extremely fast but also stable across runs, with very low variance, consistent 0 allocs/op, and ~8 B/op memory usage (0 B/op with ensure).
```go
package main

import (
	"fmt"

	"github.com/zelr0x/chainyq/deque"
)

func main() {
	d := deque.FromSlice([]int{2, 4, 8, 16})
	v, _ := d.PopBack() // 16, true
	fmt.Println(v)
	d.PushFront(1)
	if v, ok := d.PopFront(); ok { // v = 1
		fmt.Printf("popped from front: %v\n", v)
	}
	d.ToSlice() // [2 4 8]
	d.Iter().Skip(1).ForEachPtr(func(x *int) bool {
		*x *= 10
		return true // Can stop early here
	})
	d.ToSlice() // [2 40 80]
}
```

The `chainyq/deque` package provides constructors `New`/`NewPooled`/`WithCfg`/`NewValue`/`FromSlice`/`FromSliceCfg` and helpers `SuggestBlockSize`/`Len`.
The API of the deque itself consists of:
- General collection operations: `Len`, `IsEmpty`
- Peeks: `Front`, `FrontPtr`, `Back`, `BackPtr`
- Mutations: `PushFront`, `PushBack`, `PopFront`, `PopBack`
- Random access operations: `Get`, `GetPtr`, `Set`
- Memory control: `Reserve`, `ReserveFront`, `ReserveBack`, `EnsureFront`, `EnsureBack`, `ShrinkToFit`, `Clear`, `ClearRelease`
- Iterators: `Iter`, `RevIter`, `BidiIter`
- Misc: `String`, `GoString`, `Equals`, `Clone`
- Slicing: `Slice`, `PtrSlice`
- Collecting: `ToSlice`, `ToPtrSlice`, `ToChan`, `ToPtrChan`
See docs for the full API.
A synchronized version of the deque is available as `syncdeque.SyncDeque` in the `chainyq/deque/syncdeque` package.
`list.List` is a doubly-linked list that can do all the operations you expect from a linked list and more,
giving you full control without having to manipulate individual nodes and risk a subtle mistake.
The list is not safe for concurrent use.
```go
package main

import (
	"github.com/zelr0x/chainyq/list"
)

func main() {
	l := list.New[int]()
	l.Add(2).Add(4)
	l.PushBack(8)
	l.PushFront(1)
	l.String() // -> List[1, 2, 4, 8]

	l = list.FromSlice[int]([]int{1, 2, 4, 8, 4, 2, 1})
	eq := func(a, b int) bool { return a == b }
	idx := l.IndexOf(2, eq)    // -> 1
	idx = l.LastIndexOf(2, eq) // -> 5
	idx = l.IndexOf(65535, eq) // -> -1
	_ = idx
}
```

The iterators provide statically dispatched traversal and some operations
on the not-yet-traversed items.
They are optimized for every particular data structure - for example,
Skip for the List will traverse individual nodes one by one,
but for the Deque it will just calculate the block and offset and jump directly to it.
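The block-and-offset jump mentioned above can be sketched in plain Go. This is an illustrative model with a fixed block size; the actual layout in the library may differ:

```go
package main

import "fmt"

// locate maps a logical index in a segmented array with fixed-size
// blocks to a (block, offset) pair. Skip(k) then becomes O(1)
// arithmetic instead of k pointer hops as in a linked list.
func locate(start, k, blockSize int) (block, offset int) {
	i := start + k
	return i / blockSize, i % blockSize
}

func main() {
	// With 64-element blocks, skipping 200 items ahead of index 30
	// lands in block 3 at offset 38: 30+200 = 230 = 3*64 + 38.
	block, offset := locate(30, 200, 64)
	fmt.Println(block, offset) // 3 38
}
```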
The iterators also take into account the semantics of each data structure:
- `Stack` only has a top-to-bottom iterator
- `Deque` and `List` have three types of iterators - `Iter`, `RevIter` and `BidiIter`.
All iterators provide:
- Per-item traversal: `HasNext`, `Next`, `NextPtr`, `Peek`, `PeekPtr`
- Skipping: `Skip`, `SkipWhile`, `SkipWhilePtr`
- Position reset: `Reset`
- Actions on remaining items: `ForEach`, `ForEachPtr`
- Collecting remaining items: `ToChan`, `ToPtrChan`, `TakeSlice`, `TakePtrSlice`, `TakeWhile`, `TakeWhilePtr`
- Conversions: `Clone`, `Seq`, `PtrSeq`
`BidiIter` also has:
- Per-item traversal: `HasPrev`, `Prev`, `PrevPtr`, `PeekBack`, `PeekBackPtr`
- Skipping: `SkipBack`
- Position reset: `ResetBack`
- Conversions: `RevSeq`, `RevPtrSeq`
Iter and RevIter can be converted to iter.Seq (range syntax)
using IterAll or IterAllPtr.
In bidirectional data structures, Iter and RevIter can be converted to
the opposite direction with Rev and to BidiIter with Bidi.
Methods on BidiIter have the same traversal direction as Iter, except for special
reverse-direction methods that exist only on BidiIter.
list.List iterators also have methods for item insertion and removal
like InsertBefore, InsertAfter, Remove and DrainWhile.
```go
package main

import (
	"fmt"

	"github.com/zelr0x/chainyq/list"
)

func main() {
	l := list.FromSlice[int]([]int{1, 2, 4, 8, 4, 2, 1})
	l.Iter().Skip(2).ForEachPtr(func(v *int) bool {
		*v *= 2
		return true
	})
	l.String() // -> List[1 2 8 16 8 4 2]

	it := l.BidiIter()
	v, ok := it.Prev()       // -> 0, false
	v, ok = it.Peek()        // -> 1, true
	v, ok = it.Next()        // -> 1, true
	ok = it.InsertBefore(55) // -> true
	v, ok = it.Prev()        // -> 55, true
	v, ok = it.Remove()      // -> 55, true
	v, ok = it.Next()        // -> 1, true
	v, ok = it.PeekBack()    // -> 0, false
	_, _ = v, ok
	l.String() // -> List[1 2 8 16 8 4 2]

	p, ok := it.ResetBack().PrevPtr() // -> *int pointing to 2, true
	*p += 98
	ok = it.InsertAfter(42)
	l.String() // -> List[1 2 8 16 8 4 100 42]

	it.Reset(). // -> move to head
			SkipWhile(func(v int) bool {
			return v < 16
		}).              // it.Next() would return 16 here
		SkipBack(1). // now it.Next() would return 8
		DrainWhile(func(v int) bool {
			return v != 4 // skip over [8 16 8]
		})
	slice1 := l.ToSlice()                     // [1 2 4 100 42]
	slice2 := it.Reset().Skip(1).TakeSlice(3) // [2 4 100]
	fmt.Println(slice1, slice2)
}
```

See more in the documentation.
`stack.Stack` is a slice-based stack.

```go
package main

import (
	"github.com/zelr0x/chainyq/stack"
)

func main() {
	s := stack.New[int]()
	s.Push(1)
	s.Push(2)
	s.Push(3)
	if v, ok := s.PeekPtr(); ok {
		*v = 50 // overwrite the top element (3) in place
	}
	s.ToSlice()      // [50 2 1]
	s.UnwrapCopy()   // [1 2 50] (new slice)
	s.UnwrapUnsafe() // [1 2 50] (direct access to the underlying slice)
}
```

`seq.Seq` is a lightweight, lazy, composable sequence abstraction.
It wraps a generator function func() (T, bool) and exposes a fluent API
for filtering, transforming, grouping, and collecting values. Unlike eager
collections, Seq only computes elements on demand, making it efficient for
pipelines and transformations over potentially large or infinite sources.
Seq is heavily inspired by functional programming, but it diverges from
some implementations in that it does not bring a plethora of special types
that do the heavy lifting - Seq is just a wrapper around a closure that
pulls the next item from a source.
```go
package main

import (
	"github.com/zelr0x/chainyq/seq"
)

func main() {
	slice := []string{"apple", "apricot", "banana", "cake"}
	s := seq.FromSlice(slice).Filter(func(s string) bool {
		return s[0] != 'c'
	})
	m := seq.GroupBy(s, func(s string) string {
		return string(s[0])
	})
	_ = m // {"a": ["apple", "apricot"], "b": ["banana"]}
}
```

Seq can be created from a function with seq.New,
from a slice with seq.FromSlice, or from a channel with seq.FromChan.
A nil-safe zero value for Seq can be created with seq.Empty[T]().
If the exact number of items in the source is known, the sequence should be
created using seq.ExactSized. If the exact number is not known but a
correct upper bound on the size is, the sequence should be created with
seq.Sized. These are not just hints but guarantees about the size, given
to the Seq at construction time. All operations on Seq will use them
to optimize allocations during collection and other operations.
If the sequence is created from a collection that has a fast way to skip items,
WithSkipped can be called on a newly created sequence to provide a fast skip
implementation. Skip and Slice will then use the provided implementation
directly.
All operations on sequences can be divided into two subcategories:
- Intermediate - the ones returning `Seq`
- Terminating - the ones returning any other type

- `Next` consumes the next item from the sequence and all upstream sequences.
- `Skip` and `SkipWhile` return a new `Seq[T]` starting from the next element after the skipped ones.
- `Take` and `TakeWhile` return a new `Seq[T]` with either the first `n` items or the items until the first predicate mismatch, respectively. The upstream sequence is consumed only to the point required for `Take` and `TakeWhile`: `upstream.Next()` after `Take(n)` will yield the `n+1`-th item, and after `TakeWhile(pred)` it will yield the item immediately following the first item that didn't satisfy the predicate.
- `Slice` provides a safe way to slice the sequence. It returns an empty sequence if any of the arguments is invalid.
- `Filter` creates a new sequence that lazily pulls items from upstream until it finds one that satisfies the predicate, and returns it. That continues until the upstream is fully consumed. `Filter` loses `ExactSized` and fast skip, since they are no longer valid after filtering.
- `seq.Map[T, R]` and `seq.FlatMap[T, R]` work by creating new sequences that lazily apply the given transformation. They are not methods but functions in the `seq` package, because Go methods currently can't introduce new generic parameters - this limits the fluency of the API, but that will change when generic methods finally arrive.
As a rule of thumb - operations that leave the sequence's
element type unchanged are defined as methods on seq.Seq and
operations that transform the element type are functions in the seq package.
- Items in the sequence can be tested against predicates using `Find`, `Any`, `All`, and `None`.
- Items in the sequence can be counted using `Count` and `CountU64`.
- An arbitrary function can be called on all items in the sequence using `ForEach`, `ForEachUntil`, and `ForEachIndexed` - each with finer-grained control than the previous one. These operations are used to cause side effects; they don't produce new sequences and don't collect items, unless you explicitly do so inside the closure you pass as an argument.
- Items can be collected using `Seq.ToSlice[T]`, `Seq.ToChan[T]`, `seq.ToMap[T, K, V]`, `seq.ToMapMerge[T, K, V]`, or their versions that allow specifying capacity hints.
- Items can be grouped using `seq.GroupBy[T, K]`.
- Sequences can be converted to `iter.Seq` using `IterAll` and `IterSlice`.
See docs for the full list of available operations.
Transformations on a sequence (upstream) produce a new sequence (downstream), which shares the underlying source with the upstream. Items consumed from a downstream sequence are consumed from all upstream sequences unless different behavior is explicitly documented.
As a rule of thumb: after applying a transformation, do not continue using
the original sequence unless it is explicitly specified to be safe.
For example, it is safe to use the upstream after Take or Skip - it will
contain the remaining items. As another example, Count will not consume
any items at all if the exact size of the sequence is known.
IsExactSized can be used to check whether the exact size of the sequence is known.
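The sharing rule can be modeled with plain closures. This is an illustrative sketch, not the library's internals: upstream and downstream are just two functions over one cursor, so pulling from one advances the other.

```go
package main

import "fmt"

type gen func() (int, bool)

// fromSlice returns a closure over a single cursor - the shared source.
func fromSlice(s []int) gen {
	i := 0
	return func() (int, bool) {
		if i == len(s) {
			return 0, false
		}
		v := s[i]
		i++
		return v, true
	}
}

// double is a downstream transformation: it holds no data of its own,
// so every pull from it advances the shared cursor.
func double(up gen) gen {
	return func() (int, bool) {
		v, ok := up()
		return v * 2, ok
	}
}

func main() {
	up := fromSlice([]int{1, 10, 100})
	down := double(up)
	v, _ := down() // pulls 1 from the shared source, yields 2
	fmt.Println(v) // 2
	v, _ = up()    // the upstream has moved past 1 and resumes at 10
	fmt.Println(v) // 10
}
```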
The next example is intentionally overengineered to demonstrate how Seq
integrates seamlessly with other Chainyq collections, enabling complex
pipelines. While this particular case is simple, the same pattern becomes
much more useful in advanced tasks.
```go
package main

import (
	"github.com/zelr0x/chainyq"
	"github.com/zelr0x/chainyq/deque"
	"github.com/zelr0x/chainyq/seq"
)

func main() {
	slice := []string{"apple", "apricot", "banana", "cake"}
	s := seq.FromSlice(slice)
	d := deque.FromSlice(seq.Map(s, func(x string) int {
		return len(x)
	}).ToSlice())
	d.PopBack()
	sliceByEvenOdd := evenOdd(d.Iter())
	_ = sliceByEvenOdd
}

// evenOdd groups the remaining items of any Nexter by parity.
func evenOdd[T ~int](n chainyq.Nexter[T]) map[bool][]T {
	s := seq.New(n.Next)
	return seq.GroupBy(s, func(x T) bool {
		return x%2 == 0
	})
}
```
