@@ -36,6 +36,7 @@
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.conf.ConfigurationManager;
import org.apache.hadoop.hbase.conf.PropagatingConfigurationObserver;
@@ -344,6 +345,12 @@ protected void requestCompactionInternal(HRegion region, HStore store, String wh
return;
}

// Should not allow compaction if cluster is in read-only mode
if (isReadOnlyEnabled()) {
LOG.info("Ignoring compaction request for " + region + ", because read-only mode is on.");
return;
}

Comment on lines +349 to +353

Contributor:

Why don't we simply disable compaction altogether in the read-replica cluster? See line #343 in CompactSplit; there's already a check for the compaction-enabled flag. I would rather refrain from polluting the CompactSplit code with read-replica logic.
@sharmaar12 (Author), Oct 29, 2025:

We can use that approach, but one issue I can think of is that the hbase.global.readonly.enabled property is dynamically configurable via update_all_config. Is the same true for hbase.hstore.compaction.enabled?

@anmolnar (Contributor), Oct 29, 2025:

I like @wchevreuil's idea.
How about adding the read-only check to the getter?

  public boolean isCompactionsEnabled() {
    return compactionsEnabled && !isReadOnlyEnabled();
  }

You don't need to dynamically change the compaction flag.
wdyt?

@sharmaar12 (Author):

Then we may need to at least modify the log messages to mention that either compaction is disabled or read-only mode is on. Otherwise compaction may actually be enabled, but we would log it as disabled when the real cause is read-only mode.

Contributor:

LOG.info("Ignoring compaction request for " + region
  + (!isReadOnlyEnabled() ? ", because compaction is disabled." : " in read-only mode."));

Contributor:

Or just leave it as is, not a biggie.

Contributor:

> hbase.hstore.compaction.enabled

The actual property name is hbase.regionserver.compaction.enabled. Compaction is also "switchable" via the Admin.compactionSwitch() method (we expose an hbase shell command for that as well). The CompactSplit thread itself exposes a switchCompaction method, which could be called both on RS startup and from the dynamic config handler for the hbase.global.readonly.enabled property.
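To make the alternative above concrete, here is a minimal standalone sketch of a dynamic-config handler that flips the compaction switch when the read-only property changes. The method names (switchCompaction, onConfigurationChange) mirror HBase's CompactSplit and its configuration-observer hook, but this is a toy model with no HBase dependencies, not the actual implementation.

```java
import java.util.Properties;

// Standalone sketch: a config-reload handler that drives the compaction
// switch from the read-only property. Names mirror HBase but are assumptions.
public class ReadOnlySwitchSketch {
  private volatile boolean compactionsEnabled = true;

  public boolean isCompactionsEnabled() {
    return compactionsEnabled;
  }

  public void switchCompaction(boolean onOrOff) {
    this.compactionsEnabled = onOrOff;
  }

  // Invoked whenever configuration is reloaded (e.g. via update_all_config).
  public void onConfigurationChange(Properties conf) {
    boolean readOnly = Boolean.parseBoolean(
        conf.getProperty("hbase.global.readonly.enabled", "false"));
    // Note: this unconditionally overwrites the current flag, including a
    // value the operator may have set on purpose.
    switchCompaction(!readOnly);
  }
}
```

The unconditional overwrite in onConfigurationChange is the crux of the trade-off this thread goes on to weigh.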

@anmolnar (Contributor), Oct 30, 2025:

Be careful with switching the property directly. The user might have intentionally disabled it, and you should not enable it when going from R/O to R/W mode. My approach seems safer to me.

Contributor:

> Be careful with switching the property directly. The user might have intentionally disabled it, and you should not enable it when going from R/O to R/W mode. My approach seems safer to me.

Good point. Let's just do all the checks inside isCompactionsEnabled, as @anmolnar suggested.
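The agreed approach can be illustrated with a minimal self-contained sketch (no HBase dependencies; field and method names mirror CompactSplit but are stand-ins): the user-controlled flag is never mutated, and the read-only check lives only in the getter, so an intentional disable survives an R/O -> R/W transition.

```java
// Standalone sketch of the combined check agreed above. The user's flag is
// left untouched; read-only mode is folded into the getter.
public class CompactionGateSketch {
  private boolean compactionsEnabled = true; // user-controlled flag
  private boolean readOnly = false;          // cluster-wide read-only mode

  public void setCompactionsEnabled(boolean enabled) {
    this.compactionsEnabled = enabled;
  }

  public void setReadOnly(boolean readOnly) {
    this.readOnly = readOnly;
  }

  // Compactions run only if the user enabled them AND the cluster is writable.
  public boolean isCompactionsEnabled() {
    return compactionsEnabled && !readOnly;
  }
}
```

Because neither setter touches the other flag, a user who disabled compactions before entering read-only mode still sees them disabled after leaving it, which is the pitfall of flipping the switch directly.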

if (
this.server.isStopped() || (region.getTableDescriptor() != null
&& !region.getTableDescriptor().isCompactionEnabled())
@@ -442,6 +449,13 @@ private Optional<CompactionContext> selectCompaction(HRegion region, HStore stor
LOG.info(String.format("User has disabled compactions"));
return Optional.empty();
}

// Should not allow compaction if cluster is in read-only mode
if (isReadOnlyEnabled()) {
LOG.info("Compaction request skipped because read-only mode is on");
return Optional.empty();
}

Optional<CompactionContext> compaction = store.requestCompaction(priority, tracker, user);
if (!compaction.isPresent() && region.getRegionInfo() != null) {
String reason = "Not compacting " + region.getRegionInfo().getRegionNameAsString()
@@ -856,6 +870,11 @@ public boolean isCompactionsEnabled() {
return compactionsEnabled;
}

private boolean isReadOnlyEnabled() {
return conf.getBoolean(HConstants.HBASE_GLOBAL_READONLY_ENABLED_KEY,
HConstants.HBASE_GLOBAL_READONLY_ENABLED_DEFAULT);
}

public void setCompactionsEnabled(boolean compactionsEnabled) {
this.compactionsEnabled = compactionsEnabled;
this.conf.setBoolean(HBASE_REGION_SERVER_ENABLE_COMPACTION, compactionsEnabled);
@@ -143,7 +143,7 @@ private HFileContext createFileContext(Compression.Algorithm compression,
public final StoreFileWriter createWriter(CreateStoreFileWriterParams params) throws IOException {
if (!isPrimaryReplica || isReadOnlyEnabled()) {
throw new IllegalStateException(
"Should not call create writer on secondary replicas or in read only mode");
"Should not call create writer on secondary replicas or in read-only mode");
}
// creating new cache config for each new writer
final CacheConfig cacheConf = ctx.getCacheConf();
@@ -53,6 +53,9 @@
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker;
import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.hadoop.hbase.wal.WALEdit;
import org.apache.yetus.audience.InterfaceAudience;
@@ -81,6 +84,7 @@ private void internalReadOnlyGuard() throws IOException {

@Override
public void start(CoprocessorEnvironment env) throws IOException {

this.globalReadOnlyEnabled =
env.getConfiguration().getBoolean(HConstants.HBASE_GLOBAL_READONLY_ENABLED_KEY,
HConstants.HBASE_GLOBAL_READONLY_ENABLED_DEFAULT);
@@ -131,6 +135,13 @@ public void preFlush(final ObserverContext<? extends RegionCoprocessorEnvironmen
internalReadOnlyGuard();
}

@Override
public void preCompactSelection(ObserverContext<? extends RegionCoprocessorEnvironment> c,
Store store, List<? extends StoreFile> candidates, CompactionLifeCycleTracker tracker)
throws IOException {
internalReadOnlyGuard();
}

@Override
public boolean preCheckAndPut(ObserverContext<? extends RegionCoprocessorEnvironment> c,
byte[] row, byte[] family, byte[] qualifier, CompareOperator op, ByteArrayComparable comparator,
@@ -0,0 +1,87 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hbase.regionserver;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.testclassification.RegionServerTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category({ RegionServerTests.class, SmallTests.class })
public class TestCompactSplitReadOnly {

private CompactSplit compactSplit;
private Configuration conf;

@Before
public void setUp() {
conf = new Configuration();
// enable read-only mode
conf.setBoolean(HConstants.HBASE_GLOBAL_READONLY_ENABLED_KEY, true);
// create CompactSplit with conf-only constructor (available for tests)
compactSplit = new CompactSplit(conf);
}

@After
public void tearDown() {
// ensure thread pools are shutdown to avoid leakage
compactSplit.interruptIfNecessary();
}

@Test
public void testRequestSystemCompactionIgnoredWhenReadOnly() throws IOException {
// Mock HRegion and HStore minimal behavior
HRegion region = mock(HRegion.class);
HStore store = mock(HStore.class);

// Ensure compaction queues start empty
assertEquals(0, compactSplit.getCompactionQueueSize());

// Call requestSystemCompaction for a single store (selectNow = false)
compactSplit.requestSystemCompaction(region, store, "test-readonly");

// Because read-only mode is enabled, no compaction should be queued
assertEquals(0, compactSplit.getCompactionQueueSize());
}

@Test
public void testSelectCompactionIgnoredWhenReadOnly() throws IOException {
HRegion region = mock(HRegion.class);
HStore store = mock(HStore.class);

// Ensure compaction queues start empty
assertEquals(0, compactSplit.getCompactionQueueSize());

// Call requestCompaction which uses selectNow = true and should call selectCompaction
compactSplit.requestCompaction(region, store, "test-select-readonly", 0,
CompactionLifeCycleTracker.DUMMY, (User) null);

// Because read-only mode is enabled, selectCompaction should be skipped and no task queued
assertEquals(0, compactSplit.getCompactionQueueSize());
}
}