chore: MaxTableSize has been renamed to BaseTableSize (#2038)
Some mentions of MaxTableSize remained in comments and docs, even though that option no longer exists.
mitar committed Jan 6, 2024
1 parent 7b5baa1 commit 1c417aa
Showing 6 changed files with 6 additions and 6 deletions.
db.go: 1 addition & 1 deletion
@@ -162,7 +162,7 @@ func checkAndSetOptions(opt *Options) error {
// the transaction APIs. Transaction batches entries into batches of size opt.maxBatchSize.
if opt.ValueThreshold > opt.maxBatchSize {
return errors.Errorf("Valuethreshold %d greater than max batch size of %d. Either "+
"reduce opt.ValueThreshold or increase opt.MaxTableSize.",
"reduce opt.ValueThreshold or increase opt.BaseTableSize.",
opt.ValueThreshold, opt.maxBatchSize)
}
// ValueLogFileSize should be stricly LESS than 2<<30 otherwise we will
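
For readers hitting this error, a minimal sketch of the fix on the caller's side: either reduce `ValueThreshold` or increase `BaseTableSize`. The path, sizes, and v4 module path below are illustrative assumptions, not values from this commit.

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Keep ValueThreshold small relative to the table size so the
	// checkAndSetOptions guard above does not reject the configuration.
	opts := badger.DefaultOptions("/tmp/badger").
		WithValueThreshold(1 << 10). // 1 KiB: larger values go to the value log
		WithBaseTableSize(8 << 20)   // 8 MiB LSM tables at the base level
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```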
db_test.go: 1 addition & 1 deletion
@@ -1800,7 +1800,7 @@ func TestLSMOnly(t *testing.T) {

// Also test for error, when ValueThresholdSize is greater than maxBatchSize.
dopts.ValueThreshold = LSMOnlyOptions(dir).ValueThreshold
- // maxBatchSize is calculated from MaxTableSize.
+ // maxBatchSize is calculated from BaseTableSize.
dopts.MemTableSize = LSMOnlyOptions(dir).ValueThreshold
_, err = Open(dopts)
require.Error(t, err, "db creation should have been failed")
docs/content/faq/index.md: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ workloads, you should be using the `Transaction` API.

If you're using Badger with `SyncWrites=false`, then your writes might not be written to value log
and won't get synced to disk immediately. Writes to LSM tree are done inmemory first, before they
- get compacted to disk. The compaction would only happen once `MaxTableSize` has been reached. So, if
+ get compacted to disk. The compaction would only happen once `BaseTableSize` has been reached. So, if
you're doing a few writes and then checking, you might not see anything on disk. Once you `Close`
the database, you'll see these writes on disk.
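
To illustrate the behaviour this FAQ entry describes, here is a minimal sketch (the path, key, and v4 module path are hypothetical assumptions):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger").WithSyncWrites(false))
	if err != nil {
		log.Fatal(err)
	}

	// With SyncWrites=false this write lands in the in-memory LSM tree
	// and is not guaranteed to be synced to disk right away.
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("answer"), []byte("42"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Close flushes the memtables, so the write is visible on disk afterwards.
	if err := db.Close(); err != nil {
		log.Fatal(err)
	}
}
```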

docs/content/get-started/index.md: 1 addition & 1 deletion
@@ -603,7 +603,7 @@ the `Options` struct that is passed in when opening the database using
- If you modify `Options.NumMemtables`, also adjust `Options.NumLevelZeroTables` and
`Options.NumLevelZeroTablesStall` accordingly.
- Number of concurrent compactions (`Options.NumCompactors`)
- - Size of table (`Options.MaxTableSize`)
+ - Size of table (`Options.BaseTableSize`)
- Size of value log file (`Options.ValueLogFileSize`)

If you want to decrease the memory usage of Badger instance, tweak these
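
A hedged sketch of how these knobs can be set together when opening a database (the concrete values below are illustrative, not recommendations from this commit):

```go
// Assumes: import badger "github.com/dgraph-io/badger/v4"
opts := badger.DefaultOptions("/tmp/badger").
	WithNumMemtables(3).
	WithNumLevelZeroTables(3). // adjust together with NumMemtables
	WithNumLevelZeroTablesStall(8).
	WithNumCompactors(2).           // number of concurrent compactions
	WithBaseTableSize(4 << 20).     // size of table
	WithValueLogFileSize(256 << 20) // size of value log file
db, err := badger.Open(opts)
```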
options.go: 1 addition & 1 deletion
@@ -463,7 +463,7 @@ func (opt Options) WithLoggingLevel(val loggingLevel) Options {
return opt
}

- // WithBaseTableSize returns a new Options value with MaxTableSize set to the given value.
+ // WithBaseTableSize returns a new Options value with BaseTableSize set to the given value.
//
// BaseTableSize sets the maximum size in bytes for LSM table or file in the base level.
//
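
A one-line usage sketch for this setter (the size is an illustrative assumption):

```go
// Assumes: import badger "github.com/dgraph-io/badger/v4"
opts := badger.DefaultOptions("/tmp/badger").WithBaseTableSize(8 << 20) // 8 MiB base-level tables
```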
stream_writer_test.go: 1 addition & 1 deletion
@@ -349,7 +349,7 @@ func TestStreamWriter6(t *testing.T) {
}
}

- // list has 3 pairs for equal keys. Since each Key has size equal to MaxTableSize
+ // list has 3 pairs for equal keys. Since each Key has size equal to BaseTableSize
// we would have 6 tables, if keys are not equal. Here we should have 3 tables.
sw := db.NewStreamWriter()
require.NoError(t, sw.Prepare(), "sw.Prepare() failed")
