Compare commits


9 Commits

Author SHA1 Message Date
Eric Coissac
c188580aac Replace Rebatch with RebatchBySize using default batch parameters
Replace calls to Rebatch(size) with RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax()) in batchiterator.go, fragment.go, and obirefidx.go to ensure consistent use of default memory and size limits for batch rebatching.
2026-03-13 15:16:33 +01:00
Eric Coissac
1e1f575d1c refactor: replace single batch size with min/max bounds and memory limits
Introduce separate _BatchSize (min) and _BatchSizeMax (max) constants to replace the single _BatchSize variable. Update RebatchBySize to accept both maxBytes and maxCount parameters, flushing when either limit is exceeded. Set default batch size min to 1, max to 2000, and memory limit to 128 MB. Update CLI options and sequence_reader.go accordingly.
2026-03-13 15:07:35 +01:00
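The flush rule this commit describes (flush when either the count ceiling or the memory budget would be exceeded; a single oversized item still goes out alone) can be sketched in isolation. The `batcher` type below is illustrative only, not the actual obiiter API:

```go
package main

import "fmt"

// batcher records the (count, bytes) of every flushed batch. Only the
// flush rule from the commit message is reproduced here: flush when
// either the count ceiling or the memory budget would be exceeded,
// a zero limit disables that constraint, and a single oversized item
// is emitted alone.
type batcher struct {
	maxBytes, maxCount int
	count, bytes       int
	flushed            [][2]int
}

func (b *batcher) add(size int) {
	countFull := b.maxCount > 0 && b.count >= b.maxCount
	memFull := b.maxBytes > 0 && b.bytes+size > b.maxBytes && b.count > 0
	if countFull || memFull {
		b.flush()
	}
	b.count++
	b.bytes += size
}

func (b *batcher) flush() {
	if b.count > 0 {
		b.flushed = append(b.flushed, [2]int{b.count, b.bytes})
		b.count, b.bytes = 0, 0
	}
}

func main() {
	// 100-byte budget, at most 3 items per batch
	b := &batcher{maxBytes: 100, maxCount: 3}
	for _, sz := range []int{40, 40, 40, 200, 10, 10, 10, 10} {
		b.add(sz)
	}
	b.flush()
	fmt.Println(b.flushed)
}
```

The 200-byte item exceeds the budget on its own, so it lands in a batch by itself, which matches the behaviour documented for RebatchBySize later in this diff.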
Eric Coissac
40769bf827 Add memory-based batching support
Implement memory-aware batch sizing with --batch-mem CLI option, enabling adaptive batching based on estimated sequence memory footprint. Key changes:
- Added _BatchMem and related getters/setters in pkg/obidefault
- Implemented RebatchBySize() in pkg/obiter for memory-constrained batching
- Added BioSequence.MemorySize() for conservative memory estimation
- Integrated batch-mem option in pkg/obioptions with human-readable size parsing (e.g., 128K, 64M, 1G)
- Added obiutils.ParseMemSize/FormatMemSize for unit conversion
- Enhanced pool GC in pkg/obiseq/pool.go to trigger explicit GC for large slice discards
- Updated sequence_reader.go to apply memory-based rebatching when enabled
2026-03-13 14:54:21 +01:00
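The human-readable size parsing this commit introduces (obiutils.ParseMemSize, shown in full later in this diff) boils down to splitting a numeric prefix from a binary unit suffix and multiplying. A condensed, self-contained sketch of that behaviour:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemSize is a condensed sketch of the parsing behaviour described
// above: a number, then an optional binary unit suffix K/M/G/T with an
// optional trailing B, case-insensitive. The full implementation lives
// in obiutils.ParseMemSize.
func parseMemSize(s string) (int, error) {
	s = strings.ToUpper(strings.TrimSpace(s))
	s = strings.TrimSuffix(s, "B")
	mult := 1.0
	for i, u := range []string{"K", "M", "G", "T"} {
		if strings.HasSuffix(s, u) {
			mult = float64(int64(1) << (10 * (i + 1)))
			s = strings.TrimSuffix(s, u)
			break
		}
	}
	v, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid memory size: %w", err)
	}
	return int(v * mult), nil
}

func main() {
	for _, s := range []string{"512", "128K", "64M", "1G", "2GB"} {
		n, _ := parseMemSize(s)
		fmt.Printf("%s = %d bytes\n", s, n)
	}
}
```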
Eric Coissac
74e6fcaf83 feat: add static linking for Linux builds using musl
Enable static linking for Linux binaries by installing musl-tools and passing appropriate LDFLAGS during build. This ensures portable, self-contained executables for Linux targets.
2026-03-13 14:26:31 +01:00
coissac
30ec8b1b63 Merge pull request #92 from metabarcoding/push-mvpuxnxoyypu
4.4.21: Parallel builds, robust installation, and rope-based parsing enhancements
2026-03-13 12:00:32 +01:00
Eric Coissac
cdc72c5346 4.4.21: Parallel builds, robust installation, and rope-based parsing enhancements
This release introduces significant improvements to build reliability and performance, alongside key parsing enhancements for sequence data.

### Build & Installation Improvements
- Added support for parallel compilation via `-j/--jobs` option in both the Makefile and install script, enabling faster builds on multi-core systems. The default remains single-threaded for safety.
- Enhanced Makefile with `.DEFAULT_GOAL := all` for consistent behavior and a documented `help` target.
- Replaced fragile file operations with robust error handling, clear diagnostics, and automatic preservation of the build directory on copy failures to aid recovery.

### Rope-Based Parsing Enhancements (from 4.4.20)
- Introduced direct rope-based parsers for FASTA, EMBL, and FASTQ formats, improving memory efficiency for large files.
- Added U→T conversion support during sequence extraction and more reliable line ending detection.
- Unified rope scanning logic under a new `ropeScanner` for better maintainability.
- Added `TakeQualities()` method to BioSequence for more efficient handling of quality data.

### Bug Fixes (from 4.4.20)
- Fixed `CompressStream` to correctly respect the `compressed` variable.
- Replaced ambiguous string splitting utilities with precise left/right split variants (`LeftSplitInTwo`, `RightSplitInTwo`).

### Release Tooling (from 4.4.20)
- Streamlined release process with modular targets (`jjpush-notes`, `jjpush-push`, `jjpush-tag`) and AI-assisted note generation via `aichat`.
- Improved versioning support via the `VERSION` environment variable in `bump-version`.
- Switched PR submission from raw `jj git push` to `stakk` for consistency and reliability.

Note: This release incorporates key enhancements from 4.4.20 that impact end users, while focusing on build robustness and performance gains.
2026-03-13 11:59:32 +01:00
Eric Coissac
82a9972be7 Add parallel compilation support and improve Makefile/install script robustness
- Add .DEFAULT_GOAL := all to Makefile for consistent default target
- Document -j/--jobs option in README.md to allow parallel compilation
- Add JOBS variable and -j/--jobs argument to install script (default: 1)
- Replace fragile mkdir/cp commands with robust error handling and clear diagnostics
- Add build directory preservation on copy failure for manual recovery
- Pass -j option to make during compilation to enable parallel builds
2026-03-13 11:59:20 +01:00
coissac
ff6e515b2a Merge pull request #91 from metabarcoding/push-uotrstkymowq
4.4.20: Rope-based parsing, improved release tooling, and bug fixes
2026-03-12 20:15:33 +01:00
Eric Coissac
cd0c525f50 4.4.20: Rope-based parsing, improved release tooling, and bug fixes
### Enhancements
- **Rope-based parsing**: Added direct rope parsing for FASTA, EMBL, and FASTQ formats via `FastaChunkParserRope`, `EmblChunkParserRope`, and `FastqChunkParserRope`. Sequence extraction now supports U→T conversion and improved line ending detection.
- **Rope scanner refactoring**: Unified rope scanning logic under a new `ropeScanner`, improving maintainability and consistency.
- **Sequence handling**: Added `TakeQualities()` method to BioSequence for more efficient quality data handling.

### Bug Fixes
- **Compression behavior**: Fixed `CompressStream` to correctly use the `compressed` variable instead of a hardcoded boolean.
- **String splitting**: Replaced ambiguous `SplitInTwo` calls with precise `LeftSplitInTwo` or `RightSplitInTwo`, and added dedicated right-split utility.

### Tooling & Workflow Improvements
- **Makefile enhancements**: Added colored terminal output, a `help` target for documenting all targets, and improved release workflow automation.
- **Release process**: Refactored `jjpush` into modular targets (`jjpush-notes`, `jjpush-push`, `jjpush-tag`), replaced `orla` with `aichat` for AI-assisted release notes, and introduced robust JSON parsing using Python. Release notes are now generated and stored in temp files for tag creation.
- **Versioning**: `bump-version` now supports the VERSION environment variable for manual version setting.
- **Submission**: Switched from raw `jj git push` to `stakk` for PR submission.

### Internal Notes
- Installation instructions are now included in release tags.
- Fixed-size carry buffer replaced with dynamic slice for arbitrarily long line support without extra allocations.
2026-03-12 20:14:11 +01:00
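The dynamic-carry-buffer note above can be illustrated with a small sketch: the trailing partial line of each chunk is kept in a slice whose backing array is reused between chunks, so lines of any length survive chunk boundaries. This is an illustration of the idea, not the obitools rope scanner itself:

```go
package main

import "fmt"

// carrySplit keeps the trailing partial line of each chunk in a
// dynamically sized slice whose backing array is reused between
// chunks, replacing a fixed-size carry buffer.
type carrySplit struct {
	carry []byte
}

// feed splits a chunk into complete lines, prepending any carry-over
// from the previous chunk to the first line.
func (c *carrySplit) feed(chunk []byte) []string {
	var lines []string
	start := 0
	for i, b := range chunk {
		if b == '\n' {
			lines = append(lines, string(append(c.carry, chunk[start:i]...)))
			c.carry = c.carry[:0] // keep the backing array for reuse
			start = i + 1
		}
	}
	c.carry = append(c.carry, chunk[start:]...)
	return lines
}

func main() {
	c := &carrySplit{}
	fmt.Println(c.feed([]byte(">seq1\nACG")))
	fmt.Println(c.feed([]byte("T\n>seq2\n")))
}
```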
15 changed files with 310 additions and 19 deletions


@@ -62,6 +62,12 @@ jobs:
TAG=${GITHUB_REF#refs/tags/Release_}
echo "version=$TAG" >> $GITHUB_OUTPUT
- name: Install build tools (Linux)
if: runner.os == 'Linux'
run: |
sudo apt-get update -q
sudo apt-get install -y musl-tools
- name: Install build tools (macOS)
if: runner.os == 'macOS'
run: |
@@ -74,8 +80,13 @@ jobs:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
VERSION: ${{ steps.get_version.outputs.version }}
CC: ${{ matrix.goos == 'linux' && 'musl-gcc' || '' }}
run: |
make obitools
if [ "$GOOS" = "linux" ]; then
make LDFLAGS='-linkmode=external -extldflags=-static' obitools
else
make obitools
fi
mkdir -p artifacts
# Create a single tar.gz with all binaries for this platform
tar -czf artifacts/obitools4_${VERSION}_${{ matrix.output_name }}.tar.gz -C build .


@@ -2,14 +2,17 @@
#export GOBIN=$(GOPATH)/bin
#export PATH=$(GOBIN):$(shell echo $${PATH})
.DEFAULT_GOAL := all
GREEN := \033[0;32m
YELLOW := \033[0;33m
BLUE := \033[0;34m
NC := \033[0m
GOFLAGS=
LDFLAGS=
GOCMD=go
GOBUILD=$(GOCMD) build $(GOFLAGS)
GOBUILD=$(GOCMD) build $(GOFLAGS) $(if $(LDFLAGS),-ldflags='$(LDFLAGS)')
GOGENERATE=$(GOCMD) generate
GOCLEAN=$(GOCMD) clean
GOTEST=$(GOCMD) test


@@ -32,8 +32,12 @@ The installation script offers several options:
>
> -p, --obitools-prefix Prefix added to the obitools command names if you
> want to have several versions of obitools at the
> same time on your system (as example `-p g` will produce
> same time on your system (as example `-p g` will produce
> `gobigrep` command instead of `obigrep`).
>
> -j, --jobs Number of parallel jobs used for compilation
> (default: 1). Increase this value to speed up
> compilation on multi-core systems (e.g., `-j 4`).
### Examples


@@ -7,6 +7,7 @@ INSTALL_DIR="/usr/local"
OBITOOLS_PREFIX=""
VERSION=""
LIST_VERSIONS=false
JOBS=1
# Help message
function display_help {
@@ -21,6 +22,7 @@ function display_help {
echo " gobigrep command instead of obigrep)."
echo " -v, --version Install a specific version (e.g., 4.4.8)."
echo " If not specified, installs the latest version."
echo " -j, --jobs Number of parallel jobs for compilation (default: 1)."
echo " -l, --list List all available versions and exit."
echo " -h, --help Display this help message."
echo ""
@@ -65,6 +67,10 @@ while [ "$#" -gt 0 ]; do
VERSION="$2"
shift 2
;;
-j|--jobs)
JOBS="$2"
shift 2
;;
-l|--list)
LIST_VERSIONS=true
shift
@@ -122,9 +128,15 @@ mkdir -p "${WORK_DIR}/cache" \
exit 1)
# Create installation directory
mkdir -p "${INSTALL_DIR}/bin" 2> /dev/null \
|| (echo "Please enter your password for installing obitools in ${INSTALL_DIR}" 1>&2
sudo mkdir -p "${INSTALL_DIR}/bin")
if ! mkdir -p "${INSTALL_DIR}/bin" 2>/dev/null; then
if [ ! -w "$(dirname "${INSTALL_DIR}")" ] && [ ! -w "${INSTALL_DIR}" ]; then
echo "Please enter your password for installing obitools in ${INSTALL_DIR}" 1>&2
sudo mkdir -p "${INSTALL_DIR}/bin"
else
echo "Error: Could not create ${INSTALL_DIR}/bin (check path or disk space)" 1>&2
exit 1
fi
fi
if [[ ! -d "${INSTALL_DIR}/bin" ]]; then
echo "Could not create ${INSTALL_DIR}/bin directory for installing obitools" 1>&2
@@ -208,16 +220,29 @@ mkdir -p vendor
# Build with or without prefix
if [[ -z "$OBITOOLS_PREFIX" ]] ; then
make GOFLAGS="-buildvcs=false"
make -j"${JOBS}" obitools GOFLAGS="-buildvcs=false"
else
make GOFLAGS="-buildvcs=false" OBITOOLS_PREFIX="${OBITOOLS_PREFIX}"
make -j"${JOBS}" obitools GOFLAGS="-buildvcs=false" OBITOOLS_PREFIX="${OBITOOLS_PREFIX}"
fi
# Install binaries
echo "Installing binaries to ${INSTALL_DIR}/bin..." 1>&2
(cp build/* "${INSTALL_DIR}/bin" 2> /dev/null) \
|| (echo "Please enter your password for installing obitools in ${INSTALL_DIR}" 1>&2
sudo cp build/* "${INSTALL_DIR}/bin")
if ! cp build/* "${INSTALL_DIR}/bin" 2>/dev/null; then
if [ ! -w "${INSTALL_DIR}/bin" ]; then
echo "Please enter your password for installing obitools in ${INSTALL_DIR}" 1>&2
sudo cp build/* "${INSTALL_DIR}/bin"
else
echo "Error: Could not copy binaries to ${INSTALL_DIR}/bin" 1>&2
echo " Source files: $(ls build/ 2>/dev/null || echo 'none found')" 1>&2
echo "" 1>&2
echo "The build directory has been preserved for manual recovery:" 1>&2
echo " $(pwd)/build/" 1>&2
echo "You can install manually with:" 1>&2
echo " cp $(pwd)/build/* ${INSTALL_DIR}/bin/" 1>&2
popd > /dev/null || true
exit 1
fi
fi
popd > /dev/null || exit


@@ -1,6 +1,12 @@
package obidefault
var _BatchSize = 2000
// _BatchSize is the minimum number of sequences per batch (floor).
// Used as the minSeqs argument to RebatchBySize.
var _BatchSize = 1
// _BatchSizeMax is the maximum number of sequences per batch (ceiling).
// A batch is flushed when this count is reached regardless of memory usage.
var _BatchSizeMax = 2000
// SetBatchSize sets the size of the sequence batches.
//
@@ -24,3 +30,42 @@ func BatchSize() int {
func BatchSizePtr() *int {
return &_BatchSize
}
// BatchSizeMax returns the maximum number of sequences per batch.
func BatchSizeMax() int {
return _BatchSizeMax
}
func BatchSizeMaxPtr() *int {
return &_BatchSizeMax
}
// _BatchMem holds the maximum cumulative memory (in bytes) per batch when
// memory-based batching is requested. A value of 0 disables memory-based
// batching and falls back to count-based batching.
var _BatchMem = 128 * 1024 * 1024 // 128 MB default; set to 0 to disable
var _BatchMemStr = ""
// SetBatchMem sets the memory budget per batch in bytes.
func SetBatchMem(n int) {
_BatchMem = n
}
// BatchMem returns the current memory budget per batch in bytes.
// A value of 0 means memory-based batching is disabled.
func BatchMem() int {
return _BatchMem
}
func BatchMemPtr() *int {
return &_BatchMem
}
// BatchMemStr returns the raw --batch-mem string value as provided on the CLI.
func BatchMemStr() string {
return _BatchMemStr
}
func BatchMemStrPtr() *string {
return &_BatchMemStr
}


@@ -444,6 +444,67 @@ func (iterator IBioSequence) Rebatch(size int) IBioSequence {
return newIter
}
// RebatchBySize reorganises the stream into batches bounded by two independent
// upper limits: maxCount (max number of sequences) and maxBytes (max cumulative
// estimated memory). A batch is flushed as soon as either limit would be
// exceeded. A single sequence larger than maxBytes is always emitted alone.
// Passing 0 for a limit disables that constraint; if both are 0 it falls back
// to Rebatch(obidefault.BatchSizeMax()).
func (iterator IBioSequence) RebatchBySize(maxBytes int, maxCount int) IBioSequence {
if maxBytes <= 0 && maxCount <= 0 {
return iterator.Rebatch(obidefault.BatchSizeMax())
}
newIter := MakeIBioSequence()
newIter.Add(1)
go func() {
newIter.WaitAndClose()
}()
go func() {
order := 0
iterator = iterator.SortBatches()
buffer := obiseq.MakeBioSequenceSlice()
bufBytes := 0
source := ""
flush := func() {
if len(buffer) > 0 {
newIter.Push(MakeBioSequenceBatch(source, order, buffer))
order++
buffer = obiseq.MakeBioSequenceSlice()
bufBytes = 0
}
}
for iterator.Next() {
seqs := iterator.Get()
source = seqs.Source()
for _, s := range seqs.Slice() {
sz := s.MemorySize()
countFull := maxCount > 0 && len(buffer) >= maxCount
memFull := maxBytes > 0 && bufBytes+sz > maxBytes && len(buffer) > 0
if countFull || memFull {
flush()
}
buffer = append(buffer, s)
bufBytes += sz
}
}
flush()
newIter.Done()
}()
if iterator.IsPaired() {
newIter.MarkAsPaired()
}
return newIter
}
func (iterator IBioSequence) FilterEmpty() IBioSequence {
newIter := MakeIBioSequence()
@@ -638,7 +699,7 @@ func (iterator IBioSequence) FilterOn(predicate obiseq.SequencePredicate,
trueIter.MarkAsPaired()
}
return trueIter.Rebatch(size)
return trueIter.RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax())
}
func (iterator IBioSequence) FilterAnd(predicate obiseq.SequencePredicate,
@@ -694,7 +755,7 @@ func (iterator IBioSequence) FilterAnd(predicate obiseq.SequencePredicate,
trueIter.MarkAsPaired()
}
return trueIter.Rebatch(size)
return trueIter.RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax())
}
// Load all sequences availables from an IBioSequenceBatch iterator into


@@ -3,6 +3,7 @@ package obiiter
import (
log "github.com/sirupsen/logrus"
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obidefault"
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obiseq"
)
@@ -70,7 +71,7 @@ func IFragments(minsize, length, overlap, size, nworkers int) Pipeable {
}
go f(iterator)
return newiter.SortBatches().Rebatch(size)
return newiter.SortBatches().RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax())
}
return ifrg


@@ -8,6 +8,7 @@ import (
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obidefault"
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obiformats"
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obiutils"
log "github.com/sirupsen/logrus"
"github.com/DavidGamba/go-getoptions"
@@ -55,7 +56,15 @@ func RegisterGlobalOptions(options *getoptions.GetOpt) {
options.IntVar(obidefault.BatchSizePtr(), "batch-size", obidefault.BatchSize(),
options.GetEnv("OBIBATCHSIZE"),
options.Description("Number of sequence per batch for paralelle processing"))
options.Description("Minimum number of sequences per batch (floor, default 1)"))
options.IntVar(obidefault.BatchSizeMaxPtr(), "batch-size-max", obidefault.BatchSizeMax(),
options.GetEnv("OBIBATCHSIZEMAX"),
options.Description("Maximum number of sequences per batch (ceiling, default 2000)"))
options.StringVar(obidefault.BatchMemStrPtr(), "batch-mem", "",
options.GetEnv("OBIBATCHMEM"),
options.Description("Maximum memory per batch (e.g. 128K, 64M, 1G; default: 128M). Set to 0 to disable."))
options.Bool("solexa", false,
options.GetEnv("OBISOLEXA"),
@@ -157,6 +166,15 @@ func ProcessParsedOptions(options *getoptions.GetOpt, parseErr error) {
if options.Called("solexa") {
obidefault.SetReadQualitiesShift(64)
}
if options.Called("batch-mem") {
n, err := obiutils.ParseMemSize(obidefault.BatchMemStr())
if err != nil {
log.Fatalf("Invalid --batch-mem value %q: %v", obidefault.BatchMemStr(), err)
}
obidefault.SetBatchMem(n)
log.Printf("Memory-based batching enabled: %s per batch", obidefault.BatchMemStr())
}
}
func GenerateOptionParser(program string,


@@ -3,7 +3,7 @@ package obioptions
// Version is automatically updated by the Makefile from version.txt
// The patch number (third digit) is incremented on each push to the repository
var _Version = "Release 4.4.19"
var _Version = "Release 4.4.21"
// Version returns the version of the obitools package.
//


@@ -273,6 +273,28 @@ func (s *BioSequence) Len() int {
return len(s.sequence)
}
// MemorySize returns an estimate of the memory footprint of the BioSequence
// in bytes. It accounts for the sequence, quality scores, feature data,
// annotations, and fixed struct overhead. The estimate is conservative
// (cap rather than len for byte slices) so it is suitable for memory-based
// batching decisions.
func (s *BioSequence) MemorySize() int {
if s == nil {
return 0
}
// fixed struct overhead (strings, pointers, mutex pointer)
const overhead = 128
n := overhead
n += cap(s.sequence)
n += cap(s.qualities)
n += cap(s.feature)
n += len(s.id)
n += len(s.source)
// rough annotation estimate: each key+value pair ~64 bytes on average
n += len(s.annotations) * 64
return n
}
// HasQualities checks if the BioSequence has sequence quality scores.
//
// This function does not have any parameters.


@@ -1,13 +1,20 @@
package obiseq
import (
"runtime"
"sync"
"sync/atomic"
log "github.com/sirupsen/logrus"
"git.metabarcoding.org/obitools/obitools4/obitools4/pkg/obiutils"
)
const _LargeSliceThreshold = 100 * 1024 // 100 KiB — below: leave to GC, above: trigger explicit GC
const _GCBytesBudget = int64(256 * 1024 * 1024) // trigger GC every 256 MB of large discards
var _largeSliceDiscardedBytes = atomic.Int64{}
var _BioSequenceByteSlicePool = sync.Pool{
New: func() interface{} {
bs := make([]byte, 0, 300)
@@ -34,6 +41,13 @@ func RecycleSlice(s *[]byte) {
}
if cap(*s) <= 1024 {
_BioSequenceByteSlicePool.Put(s)
} else if cap(*s) >= _LargeSliceThreshold {
n := int64(cap(*s))
*s = nil
prev := _largeSliceDiscardedBytes.Load()
if _largeSliceDiscardedBytes.Add(n)/_GCBytesBudget > prev/_GCBytesBudget {
runtime.GC()
}
}
}
}


@@ -214,6 +214,8 @@ func CLIReadBioSequences(filenames ...string) (obiiter.IBioSequence, error) {
iterator = iterator.Speed("Reading sequences")
iterator = iterator.RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax())
return iterator, nil
}


@@ -291,5 +291,5 @@ func IndexReferenceDB(iterator obiiter.IBioSequence) obiiter.IBioSequence {
go f()
}
return indexed.Rebatch(obidefault.BatchSize())
return indexed.RebatchBySize(obidefault.BatchMem(), obidefault.BatchSizeMax())
}

pkg/obiutils/memsize.go (new file, 85 additions)

@@ -0,0 +1,85 @@
package obiutils
import (
"fmt"
"strconv"
"strings"
"unicode"
)
// ParseMemSize parses a human-readable memory size string and returns the
// equivalent number of bytes. The value is a number optionally followed by a
// unit suffix (case-insensitive):
//
// B or (no suffix) — bytes
// K or KB — kibibytes (1 024)
// M or MB — mebibytes (1 048 576)
// G or GB — gibibytes (1 073 741 824)
// T or TB — tebibytes (1 099 511 627 776)
//
// Examples: "512", "128K", "128k", "64M", "1G", "2GB"
func ParseMemSize(s string) (int, error) {
s = strings.TrimSpace(s)
if s == "" {
return 0, fmt.Errorf("empty memory size string")
}
// split numeric prefix from unit suffix
i := 0
for i < len(s) && (unicode.IsDigit(rune(s[i])) || s[i] == '.') {
i++
}
numStr := s[:i]
unit := strings.ToUpper(strings.TrimSpace(s[i:]))
// strip trailing 'B' from two-letter units (KB→K, MB→M …)
if len(unit) == 2 && unit[1] == 'B' {
unit = unit[:1]
}
val, err := strconv.ParseFloat(numStr, 64)
if err != nil {
return 0, fmt.Errorf("invalid memory size %q: %w", s, err)
}
var multiplier float64
switch unit {
case "", "B":
multiplier = 1
case "K":
multiplier = 1024
case "M":
multiplier = 1024 * 1024
case "G":
multiplier = 1024 * 1024 * 1024
case "T":
multiplier = 1024 * 1024 * 1024 * 1024
default:
return 0, fmt.Errorf("unknown memory unit %q in %q", unit, s)
}
return int(val * multiplier), nil
}
// FormatMemSize formats a byte count as a human-readable string with the
// largest unit that produces a value ≥ 1 (e.g. 1536 → "1.5K").
func FormatMemSize(n int) string {
units := []struct {
suffix string
size int
}{
{"T", 1024 * 1024 * 1024 * 1024},
{"G", 1024 * 1024 * 1024},
{"M", 1024 * 1024},
{"K", 1024},
}
for _, u := range units {
if n >= u.size {
v := float64(n) / float64(u.size)
if v == float64(int(v)) {
return fmt.Sprintf("%d%s", int(v), u.suffix)
}
return fmt.Sprintf("%.1f%s", v, u.suffix)
}
}
return fmt.Sprintf("%dB", n)
}
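Since both helpers in this file are new, a condensed round-trip check may be useful. The `formatMemSize` below reproduces the formatting logic above in simplified, self-contained form (the authoritative code is the FormatMemSize just shown):

```go
package main

import "fmt"

// formatMemSize condenses the FormatMemSize logic above: pick the
// largest binary unit yielding a value >= 1, and drop the decimal
// part when the value is integral.
func formatMemSize(n int) string {
	size := 1 << 40
	for _, suf := range []string{"T", "G", "M", "K"} {
		if n >= size {
			v := float64(n) / float64(size)
			if v == float64(int(v)) {
				return fmt.Sprintf("%d%s", int(v), suf)
			}
			return fmt.Sprintf("%.1f%s", v, suf)
		}
		size >>= 10
	}
	return fmt.Sprintf("%dB", n)
}

func main() {
	fmt.Println(formatMemSize(1536))              // the doc comment's example: "1.5K"
	fmt.Println(formatMemSize(128 * 1024 * 1024)) // the default batch budget: "128M"
	fmt.Println(formatMemSize(512))
}
```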


@@ -1 +1 @@
4.4.19
4.4.21