RPC: Add maxfeerate and maxburnamount args to submitpackage #28950
Conversation
rebased on #28848 and ready for review

Concept ACK
Concept ACK
My ideal approach would be to not check client-side stuff inside validation, e.g. by adding a helper that loads coins (probably not extensible to chunk feerate, but could maybe be a sufficient sanity check), or a test package accept + real submit (the way `BroadcastTransaction` does it). But if it's unavoidable, then this seems fine. AFAICT it's extensible to checking chunk feerate in the future.
Concept ACK. I compiled from source, read through each modified line of code, ran the automated tests, and modified …
```diff
@@ -183,7 +183,7 @@ static RPCHelpMan testmempoolaccept()
     Chainstate& chainstate = chainman.ActiveChainstate();
     const PackageMempoolAcceptResult package_result = [&] {
         LOCK(::cs_main);
-        if (txns.size() > 1) return ProcessNewPackage(chainstate, mempool, txns, /*test_accept=*/true);
+        if (txns.size() > 1) return ProcessNewPackage(chainstate, mempool, txns, /*test_accept=*/true, /*max_sane_feerate=*/{});
```
Should we test `maxfeerate` and `maxburnamount` for `testmempoolaccept` with a package also?
`testmempoolaccept` is basically shoving all transactions, whether or not they're a package, into `AcceptMultipleTransactions`. I'm not sure it makes sense due to this.
src/validation.cpp (Outdated)

```diff
@@ -1295,6 +1312,15 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
             results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
             return PackageMempoolAcceptResult(package_state, std::move(results));
         }
+
+        // Individual modified feerate exceeded caller-deemed "sane" max; abort
```
My understanding of package submission is that a child with unconfirmed parents is bundled together as a package for incentive compatibility. Should this instead check the package fee rate against `maxfeerate`?

If I understand correctly, currently we are checking each package transaction's fee rate individually against `maxfeerate`. A child transaction might be greater than `maxfeerate`, but as a package it might not be. In this case, we might reject the child transaction and accept the subpackage, which might not be incentive compatible at the current fee estimates.
For example, suppose we want to fee bump tx A, with the arrangement below. `maxfeerate` is set to 50 s/vb.

- All txs are 10 vB
- first value is absolute fee
- second value is ancestor score
```text
┌───────────┐
│ A (10s)   │
│ 1 s/vb    │
└───────────┘
      │
      ▼
┌───────────┐
│ B (90s)   │
│ 5 s/vb    │
└───────────┘
      │
      ▼
┌────────────┐
│ C (1000s)  │
│ 36.6 s/vb  │
└────────────┘
```
If `maxfeerate` is 50 s/vb, we will accept A-B and reject C because its fee rate is 100 s/vb. But if we evaluate `maxfeerate` against the package fee rate, we will accept the whole package A-B-C.
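A quick back-of-the-envelope check of the numbers in this example (the tx names, fees, and sizes come from the diagram above; the helpers are purely illustrative, not Bitcoin Core code):

```python
# Illustrative only: recompute the individual feerates and ancestor scores
# from the A -> B -> C example (fees in sats, sizes in vB).
txs = [("A", 10, 10), ("B", 90, 10), ("C", 1000, 10)]  # (name, fee, vsize)

def individual_feerate(fee, vsize):
    return fee / vsize

def ancestor_score(chain):
    # Feerate of a tx evaluated together with all of its ancestors.
    total_fee = sum(fee for _, fee, _ in chain)
    total_vsize = sum(vsize for _, _, vsize in chain)
    return total_fee / total_vsize

MAXFEERATE = 50  # s/vb, as in the example

# Individual feerates: A = 1, B = 9, C = 100 s/vb
rates = {name: individual_feerate(fee, vsize) for name, fee, vsize in txs}

# Ancestor scores: A = 1, A+B = 5, A+B+C ~= 36.67 s/vb
scores = {txs[i][0]: ancestor_score(txs[: i + 1]) for i in range(len(txs))}

# Per-transaction checking rejects C (100 > 50) even though the whole
# package's ancestor score would pass (36.67 <= 50).
rejected = [name for name, rate in rates.items() if rate > MAXFEERATE]
```

This reproduces the tension being discussed: the per-transaction check rejects C while a package-level check would let A-B-C through.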
Ignoring topo restrictions in `AcceptPackage`, since `C` should have `A` as a direct ancestor:

> The child transactions might be greater than the maxfeerate, but as a package might not be. In this case, we might reject the child transactions and accept the subpackage, which might not be incentive compatible at the current fee estimates.

This is prior to any relay; of course restricting feerates is incentive compatible once published on the open network!
> If maxfeerate is 50s/vb we will accept A-B and reject C because its ancestor score is 100s/vb.
> But if we are evaluating maxfeerate against package fee rate we will accept the whole package A-B-C

Assuming you meant C's feerate is 100 sat/vbyte: if the mempool minfee is 5 sat/vbyte, A+B would be accepted (as they'd be evaluated in their own subpackage already), then C would be evaluated on its own via `AcceptSingleTransaction`, and rejected.

In a post-cluster-mempool world, this would likely be reduced to evaluating chunks only, and so you'd look at A+B+C only, and reject the whole thing.
> Assuming you meant C's feerate is 100 sat/vbyte

Yes, C's fee rate is 100 s/vb; edited, thanks.

> This is prior to any relay, of course restricting feerates is incentive compatible once published on the open network

Do you mean incentive incompatible here? Because in the given example, if C is rejected, (A,B) will be added to the mempool with a mining score of 5 s/vb, which is lower than what's intended.
> Do you mean incentive incompatible here? because in the given example if C is rejected (A,B) will be added to the mempool with mining score of 5s/vb which is lower than whats intended.

Correct, typo.
I understand why we should not check the package fee rate here: in some arrangements, e.g., a parent transaction P will be CPFP'ed by some child E, and E can have another child F, which is on its own. Checking the package fee rate in that case is incorrect; we should instead evaluate using chunk fee rates later, as you said 👍🏾
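A rough sketch of what chunk-based evaluation could look like (hypothetical helper, not Bitcoin Core code; it assumes a topologically valid linearization and merges a transaction into the preceding chunk whenever it pays a higher feerate, in the spirit of cluster-mempool chunking):

```python
def chunk_feerates(linearized):
    """linearized: list of (fee_sats, vsize) in a topologically valid order.
    Returns (fee, vsize) chunks with non-increasing chunk feerates."""
    chunks = []
    for fee, vsize in linearized:
        chunks.append((fee, vsize))
        # Merge while the new trailing chunk pays a higher feerate than the
        # chunk before it (cross-multiplied to avoid float division).
        while len(chunks) > 1 and chunks[-1][0] * chunks[-2][1] > chunks[-2][0] * chunks[-1][1]:
            f2, s2 = chunks.pop()
            f1, s1 = chunks.pop()
            chunks.append((f1 + f2, s1 + s2))
    return chunks

# P (low fee) is bumped by its child E; F is E's independent low-fee child.
# Chunking groups P+E together and leaves F as its own, lower-feerate chunk,
# so a feerate cap would be applied per chunk rather than per package.
P, E, F = (10, 100), (990, 100), (50, 100)
```

Under this grouping, a `maxfeerate`-style cap would see the P+E chunk at 5 s/vb and F at 0.5 s/vb, rather than one blended package rate that F could hide behind.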
```diff
+    { "submitpackage", 1, "maxfeerate" },
+    { "submitpackage", 2, "maxburnamount" },
```
Should we just enable passing `maxfeerate` and `maxburnamount` as config variables, since they are used by three RPCs now (`sendrawtransaction`, `testmempoolaccept`, and `submitpackage`)? Downstream might prefer this.
startup config? Maybe? Could open an issue
Done in #29217
Not a good idea, because we would have to restart whenever we want to update the threshold for an RPC call. Can be resolved.
```diff
+    const CAmount max_burn_amount = request.params[2].isNull() ? 0 : AmountFromValue(request.params[2]);
+
```
We did not document anywhere that the default behavior, if `maxburnamount` is not passed, is to reject the package if any transaction has an unspendable output greater than 0. Maybe indicate it in the `RPCHelpMan` of `maxburnamount`? It's also not indicated for `sendrawtransaction`.
added a mention of the default being 0
test/functional/rpc_packages.py (Outdated)

```diff
+        assert_raises_rpc_error(-25, "Unspendable output exceeds maximum configured by user", node.submitpackage, chain_hex, 0, 0.1)
+        assert_equal(node.getrawmempool(), [])
+
+        # Turn off restrictions for both and send it; parent gets through as own subpackage
```
To turn off `maxfeerate` you should pass a fee rate of 0; you can't turn off the restriction of `maxburnamount`. nit: maybe reword to "relax"?
```diff
-        # Turn off restrictions for both and send it; parent gets through as own subpackage
+        # Relax the restrictions and send it; parent gets through as own subpackage
```
taken
Isn't this check a little bit later than ideal? Wouldn't it make more sense to check both maxfeerate and maxburnamount as part of …

I think it's indeed good to have that, but isn't it still possible that transactions submitted via …

This PR is essentially focusing on the max-fee check during submission to the local mempool only. Wallet features to reduce accidental overpayment are also welcome, and may already exist in some forms?
src/validation.cpp (Outdated)

```cpp
     * due to fee miscalulation by wallets. std:nullopt implies unset, allowing any feerates.
     * Any individual transaction failing this check causes immediate failure.
     */
    const std::optional<CFeeRate> m_max_sane_feerate;
```
nit: I think calling this `m_client_maxfeerate` would make more sense. I don't think "sane" tells us very much, and it'd be good to specify that this comes from the client. Generally I don't think "sanity check" is the right term for these, just "check", as the user can provide a strange value (or no value) and we don't judge whether it makes sense or not.
done
src/validation.cpp (Outdated)

```diff
@@ -1295,6 +1312,15 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
             results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
             return PackageMempoolAcceptResult(package_state, std::move(results));
         }
+
+        // Individual modified feerate exceeded caller-deemed "sane" max; abort
+        if (args.m_max_sane_feerate && CFeeRate(ws.m_modified_fees, ws.m_vsize) > args.m_max_sane_feerate.value()) {
```
I can see why we'd check individual feerate instead of chunk feerate, since we don't currently check whether there's a real CPFP happening and which transactions are in the chunk containing a CPFP. Could add a TODO item explaining this?
added note
rebased

ACK b6fe346
Concept ACK
```diff
@@ -823,6 +824,14 @@ static RPCHelpMan submitpackage()
                     {"rawtx", RPCArg::Type::STR_HEX, RPCArg::Optional::OMITTED, ""},
                 },
             },
+            {"maxfeerate", RPCArg::Type::AMOUNT, RPCArg::Default{FormatMoney(DEFAULT_MAX_RAW_TX_FEE_RATE.GetFeePerK())},
+             "Reject transactions whose fee rate is higher than the specified value, expressed in " + CURRENCY_UNIT +
+             "/kvB.\nFee rates larger than 1BTC/kvB are rejected.\nSet to 0 to accept any fee rate."},
```
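For intuition on the units in this help text, a small conversion sketch (assuming the usual 100,000,000 sats per BTC): the help string's 1 BTC/kvB hard cap corresponds to 100,000 sat/vB, and a 0.10 BTC/kvB default to 10,000 sat/vB.

```python
COIN = 100_000_000  # sats per BTC

def btc_per_kvb_to_sat_per_vb(rate_btc_per_kvb):
    # Convert the per-kvB amount to whole sats first, then divide by
    # 1000 vB per kvB (rounding avoids float artifacts like 0.1 * 1e8).
    sat_per_kvb = round(rate_btc_per_kvb * COIN)
    return sat_per_kvb / 1000

default_max = btc_per_kvb_to_sat_per_vb(0.10)  # 0.10 BTC/kvB -> 10,000 sat/vB
hard_cap    = btc_per_kvb_to_sat_per_vb(1.0)   # "larger than 1BTC/kvB are rejected"
```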
Are we sure that 100,000 ṩ/vB is not a bit high especially for a default that can be overridden? It seems to me that we would probably want at least one if not two magnitudes lower, especially if we are looking at a complete package and therefore already consider children bumping parents in the appropriate context?
This PR isn't changing the current default, so imo it's best to open a separate issue to discuss it. This isn't looking at a complete package btw
Opened in #29661
```diff
@@ -472,6 +472,11 @@ class MemPoolAccept
      * policies such as mempool min fee and min relay fee.
      */
     const bool m_package_feerates;
+    /** Used for local submission of transactions to catch "absurd" fees
+     * due to fee miscalulation by wallets. std:nullopt implies unset, allowing any feerates.
+     * Any individual transaction failing this check causes immediate failure.
```
Should the `maxfeerate` perhaps rather apply to a transaction's mining score in the package? That way, the child transaction doing the bumping could have a much larger nominal feerate while staying in the sane range for the effective feerate.
I think this will become relatively trivial post-cluster mempool, reconsider it there?
```cpp
     * due to fee miscalulation by wallets. std:nullopt implies unset, allowing any feerates.
     * Any individual transaction failing this check causes immediate failure.
     */
    const std::optional<CFeeRate> m_client_maxfeerate;
```
In a followup, perhaps we can reorder the `ATMPArgs` members for better struct-packing.
I think the current memory layout requires 56 bytes for an `ATMPArgs` object:

```cpp
struct ATMPArgs {
    const CChainParams& m_chainparams;                 // 8 bytes
    const int64_t m_accept_time;                       // 8 bytes
    const bool m_bypass_limits;                        // 1 byte
                                                       // 7 bytes padding
    std::vector<COutPoint>& m_coins_to_uncache;        // 8 bytes
    const bool m_test_accept;                          // 1 byte
    const bool m_allow_replacement;                    // 1 byte
    const bool m_package_submission;                   // 1 byte
    const bool m_package_feerates;                     // 1 byte
                                                       // 4 bytes padding
    const std::optional<CFeeRate> m_client_maxfeerate; // 9 bytes
                                                       // 7 bytes padding
}; // TOTAL: 7 * 8 = 56 bytes
```
With optimal (?) member ordering, we could get this down to 48 bytes, I think:

```cpp
struct ATMPArgs {
    const CChainParams& m_chainparams;                 // 8 bytes
    const int64_t m_accept_time;                       // 8 bytes
    std::vector<COutPoint>& m_coins_to_uncache;        // 8 bytes
    const std::optional<CFeeRate> m_client_maxfeerate; // 9 bytes
                                                       // 7 bytes padding
    const bool m_bypass_limits;                        // 1 byte
    const bool m_test_accept;                          // 1 byte
    const bool m_allow_replacement;                    // 1 byte
    const bool m_package_submission;                   // 1 byte
    const bool m_package_feerates;                     // 1 byte
                                                       // 3 bytes padding
}; // TOTAL: 6 * 8 bytes = 48 bytes
```
However, given that afaik we generally only spawn one `ATMPArgs` at a time, I don't think we should change the member ordering just for the memory footprint here, and it's fine to leave as-is? Likewise, adding actual struct packing (e.g. with `#pragma`) almost certainly doesn't seem worth the performance trade-off (didn't run any benchmarks).
Nice, thanks @stickies-v! Agree it's not really worth it when we only ever have 1 at a time.
And thread the feerate value through ProcessNewPackage to reject individual transactions that exceed the given feerate. This allows subpackage processing, and is compatible with future package RBF work.
ACK 38f70ba with some non-blocking nits
```diff
  * @returns a PackageMempoolAcceptResult which includes a MempoolAcceptResult for each transaction.
  * If a transaction fails, validation will exit early and some results may be missing. It is also
  * possible for the package to be partially submitted.
  */
 PackageMempoolAcceptResult ProcessNewPackage(Chainstate& active_chainstate, CTxMemPool& pool,
-                                             const Package& txns, bool test_accept)
+                                             const Package& txns, bool test_accept, std::optional<CFeeRate> max_sane_feerate)
```
this one didn't get changed to `client_max_feerate` (nit)
```diff
     };
 }
 
     /** Parameters for child-with-unconfirmed-parents package validation. */
     static ATMPArgs PackageChildWithParents(const CChainParams& chainparams, int64_t accept_time,
-                                            std::vector<COutPoint>& coins_to_uncache) {
+                                            std::vector<COutPoint>& coins_to_uncache, std::optional<CFeeRate>& client_maxfeerate) {
```
nit: should use const reference
nit: Seems like you are passing by reference in most places, which makes sense to me:

```diff
-                                             const Package& txns, bool test_accept, std::optional<CFeeRate> max_sane_feerate)
+                                             const Package& txns, bool test_accept, const std::optional<CFeeRate>& max_sane_feerate)
```
@glozow I'll address these if I touch the commits again
LGTM, code review ACK 38f70ba
Re-ACK 38f70ba 👍🏾
7b29119 use const ref for client_maxfeerate (Greg Sanders) f10fd07 scripted-diff: Rename max_sane_feerate to client_maxfeerate (Greg Sanders) Pull request description: Follow-ups to #28950 ACKs for top commit: glozow: utACK 7b29119 stickies-v: ACK 7b29119 Tree-SHA512: b9e13509c6e9d7c08aa9d4e759f9707004c1c7b8f3e521fe2ec0037160b87c7fb02528966b9f26eaca6291621df9300e84b5aec66dbc2e97d13bf2f3cd7f979c
… as client_maxfeerate failure 4ba1d0b fuzz: Add coverage for client_maxfeerate (Greg Sanders) 91d7d8f AcceptMultipleTransactions: Fix workspace client_maxfeerate (Greg Sanders) f3aa5bd fill_mempool: assertions and docsctring update (Greg Sanders) a3da63e Move fill_mempool to util function (Greg Sanders) 73b68bd fill_mempool: remove subtest-specific comment (Greg Sanders) Pull request description: Bug causes an `Assume()` failure due to the expectation that the individual result should be invalid when done over `submitpackage` via rpc. Bug introduced by #28950 , and I discovered it rebasing #28984 since it's easier to hit in that test scenario. Tests in place were only checking `AcceptSingleTransaction`-level checks due to package evaluation only triggering when minfee is too high for the parent transaction. Added test along with fix, moving the fill_mempool utility into a common area for re-use. ACKs for top commit: glozow: reACK 4ba1d0b theStack: ACK 4ba1d0b ismaelsadeeq: re-ACK 4ba1d0b via [diff](https://github.com/bitcoin/bitcoin/compare/4fe7d150eb3c85a6597d8fc910fe1490358197ad..4ba1d0b55339c3ea90e2bcd64662a06f0f90dd46) Tree-SHA512: 3729bdf7f25d04e232f173ccee04ddbb2afdaafa3d04292a01cecf58fb11b3b2bc133e8490277f1a67622b62d17929c242dc980f9bb647896beea4332ee35306
Resolves #28949
I couldn't manage to do it very cleanly outside of (sub)package evaluation itself, since it would change the current interface very heavily. Instead I threaded through the max fee argument and used that directly via ATMPArgs. From that perspective, this is somewhat a reversion from #19339. In a post-cluster mempool world, these checks could be consolidated to right after the given (ancestor) package is linearized/chunked, by just checking the feerate of the top chunk and rejecting the submission entirely if the top chunk is too high.
The implication here is that subpackages can be submitted to the mempool prior to hitting this new fee-based error condition.