paritytech/parity-scale-codec

Backwards incompatible change in patch version

prestwich opened this issue · 18 comments

Updating from v3.6.4 to v3.6.8 introduced a backwards incompatible change, caught by ruint CI here.

The change was in this commit, introduced in #512.

Please consider yanking 3.6.7 & 3.6.8

ggwpez commented
bkchr commented

Yeah that was my fault 🙈 #512 (comment)

Sorry!

No worries, it's a pretty straightforward fix. Would help to know whether 3.6.8 will also be yanked, or if we should pin the dependency to <3.6.8 in the next ruint release

@prestwich this ruint PR should fix it. I'm not super familiar with the project, so reviews are welcome 🙏
recmo/uint#313

bkchr commented

Yeah, maybe we should yank and release 4.0.0. Or does anyone have better ideas?

Yanked it already to be safe. What about just fixing uint and then re-submitting?
Do you expect more projects that would start failing?

WRT fixing ruint, we can't change the dep specifier in the currently released versions. So if this change gets released outside a major, anyone using current versions of ruint will be broken. Updating ruint could solve that for most users, but we have at least one user stuck on ruint@1.8.0 (in Solana, which is significantly behind mainline compiler versions, too old for our current MSRV) who would have no upgrade path and would have to pin the parity-scale-codec crate. Not the end of the world.

ruint's policy is also to support multiple dep major versions (as dropping support for one would itself be a breaking change). So if this becomes parity-scale-codec@4, we would introduce a parity-scale-codec-v4 feature rather than updating the existing support.

So overall, we'd prefer not to update our existing support to the breaking change. If it becomes a major, we're happy to add the new support alongside the existing support

This is your lib, so let me know what you want to do here. Appreciate the help :)

So, we should release v4.0.0 with the breaking change, and ruint can update later on using the PR I submitted.
Unless we want to roll back and go with the initially suggested commit of #508, which is not as nice but should not break downstream dependencies.

Yep, exactly. Let me know what y'all end up deciding and I'll coordinate any ruint changes.

bkchr commented

@pgherveou how much do you need Compact support for MaxEncodedLen? Could you maybe avoid the compact attribute? That might already make it work right now. Otherwise we can use your "less optimal" solution for now. There are some issues we should tackle before we go to 4.0; we could also solve them and then bump.
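For context, a minimal standalone sketch (values chosen purely for illustration) of why the compact attribute matters here: SCALE compact encoding is variable-length, so the worst-case size of a compact field differs from its fixed-width encoding, and a max_encoded_len() implementation has to account for that.

use parity_scale_codec::{Compact, Encode};

fn main() {
    // Compact encoding is variable-length: small values take fewer bytes,
    // while the worst case for a u32 is 5 bytes (1 header byte + 4 data bytes).
    assert_eq!(Compact(1u32).encode().len(), 1);
    assert_eq!(Compact(u32::MAX).encode().len(), 5);
    // A plain (non-compact) u32 always encodes to exactly 4 bytes.
    assert_eq!(1u32.encode().len(), 4);
}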

We have this function here, read_sandbox_memory_as, that reads up to MaxEncodedLen bytes to decode types from the contract memory. This could fail with a max_encoded_len fn that returns too small a value for structs with compact fields.

It looks like we don't use the Compact type right now, and arbitrary contract storage does not rely on this method, so we should be safe for now, but I would like to be able to read existing types that do define Compact fields in the future.

https://github.com/paritytech/polkadot-sdk/blob/f7c95c5fc12d3abde866bc1ee106909f950e5427/substrate/frame/contracts/src/wasm/runtime.rs#L627-L638
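For illustration, a simplified sketch of that pattern (hypothetical names; the real read_sandbox_memory_as lives in pallet-contracts and also applies a nesting depth limit via decode_with_depth_limit): slice out at most max_encoded_len() bytes at the given pointer, fail if that range leaves the sandbox memory, then decode.

use parity_scale_codec::{Decode, Encode, MaxEncodedLen};

// Hypothetical, simplified stand-in for the pattern described above.
fn read_as<T: Decode + MaxEncodedLen>(memory: &[u8], ptr: usize) -> Result<T, &'static str> {
    // Bounds check: ptr..ptr + max_encoded_len() must lie inside the sandbox memory.
    let end = ptr.checked_add(T::max_encoded_len()).ok_or("pointer overflow")?;
    let mut input = memory.get(ptr..end).ok_or("out of bounds")?;
    T::decode(&mut input).map_err(|_| "decoding failed")
}

fn main() {
    // Pretend the contract memory contains an encoded u32 at offset 0.
    let memory = 42u32.encode();
    let value: u32 = read_as(&memory, 0).expect("in bounds and decodable");
    assert_eq!(value, 42);
}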

bkchr commented

Without wanting to look too deep into this: isn't read_sandbox_memory_as trusted code that is used by pallet-contracts from the runtime side? Why do we need to use MaxEncodedLen there at all?

It is invoked by the runtime when a contract invokes a host function from pallet-contracts.
The contract passes a pointer into its memory, and we read from that pointer up to max_encoded_len() bytes for the type we need, then decode it using decode_with_depth_limit.
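As a standalone sketch of the decode_with_depth_limit call mentioned here (the values and the limit are made up; the real code uses the pallet's MAX_DECODE_NESTING constant):

use parity_scale_codec::{DecodeLimit, Encode};

fn main() {
    // decode_with_depth_limit behaves like Decode::decode but additionally
    // rejects input whose type nesting exceeds the given depth limit.
    let encoded = (1u32, vec![2u8, 3]).encode();
    let decoded: (u32, Vec<u8>) =
        DecodeLimit::decode_with_depth_limit(10, &mut &encoded[..]).expect("within limits");
    assert_eq!(decoded, (1, vec![2, 3]));
}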

bkchr commented

But why not just directly decode from the memory?

That's what we do: we decode the type from the contract's memory. We just need a bounds check to make sure we are not reading outside the sandbox memory.

bkchr commented

But at the point you are reading it, you are in trusted code? And you are declaring the type, not the contract. I really don't get why you need MaxEncodedLen for your use case.

(call $seal_terminate
	(i32.const 65536)  ;; Pointer to "account" address (out of bound).
)

Here, reading AccountId::max_encoded_len() bytes from the specified account address will trigger an OutOfBounds error.
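To make that concrete, a hypothetical illustration assuming a 64 KiB sandbox memory and a 32-byte AccountId: the pointer 65536 is already one past the end of the memory, so the bound-checked slice does not exist and the host call fails before anything is decoded.

fn main() {
    // Hypothetical 64 KiB sandbox memory, matching the wasm snippet above.
    let memory = vec![0u8; 65_536];
    let ptr = 65_536usize; // out-of-bounds pointer passed to seal_terminate
    let len = 32usize; // assumed AccountId::max_encoded_len() for a 32-byte account id
    // The requested range lies outside the sandbox, so the read is rejected
    // with an out-of-bounds error instead of reading past the end.
    assert!(memory.get(ptr..ptr + len).is_none());
}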

bkchr commented

You can just change this code here to this:

let mut bound_checked = memory
    .get(ptr..)
    .ok_or_else(|| Error::<E::T>::OutOfBounds)?;
// Map decode failures to the pallet's error type (error handling assumed here).
let decoded = D::decode_with_depth_limit(MAX_DECODE_NESTING, &mut bound_checked)
    .map_err(|_| Error::<E::T>::DecodingFailed)?;

Decode will return an error if there aren't enough bytes in the data to decode the type.
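A quick standalone check of that claim (not pallet-contracts code): decoding a fixed-size type from an input that is too short fails cleanly, and trailing bytes are simply left unread, which is why slicing from ptr to the end of memory is enough.

use parity_scale_codec::Decode;

fn main() {
    // A u32 needs 4 bytes; 2 bytes of input make decode fail instead of reading further.
    let short: &[u8] = &[1, 2];
    assert!(u32::decode(&mut &short[..]).is_err());

    // With enough bytes, decoding succeeds and leaves any trailing data unread.
    let long: &[u8] = &[1, 0, 0, 0, 0xff, 0xff];
    assert_eq!(u32::decode(&mut &long[..]).unwrap(), 1);
}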