Commit 6fbe4e4

Merge remote-tracking branch 'origin/next' into ad/misc-kind-fixups

2 parents: 1a03b30 + 02d01bb

File tree: 27 files changed (+170, −78 lines)

.test_patterns.yml
Lines changed: 0 additions & 6 deletions

```diff
@@ -343,12 +343,6 @@ tests:
     owners:
       - *adam
 
-  - regex: "validator-ha-signer/src/db/postgres.test.ts"
-    error_regex: "FailureMessage Object"
-    owners:
-      - *spyros
-      - *alex
-
   # http://ci.aztec-labs.com/e8228a36afda93b8
   # Test passed but there was an error on stopping
   - regex: "playground/scripts/run_test.sh"
```

docs/docs-developers/docs/resources/migration_notes.md
Lines changed: 72 additions & 0 deletions

New content added under the `## TBD` heading:

### [Aztec.js] Transaction sending API redesign

The old chained `.send().wait()` pattern has been replaced with a single `.send(options)` call that handles both sending and waiting.

```diff
+ import { Contract, NO_WAIT } from '@aztec/aztec.js/contracts';

- const receipt = await contract.methods.transfer(recipient, amount).send().wait();

// Send now waits by default
+ const receipt = await contract.methods.transfer(recipient, amount).send({ from: sender });

// getTxHash() would confusingly send the transaction too
- const txHash = await contract.methods.transfer(recipient, amount).send().getTxHash();

// Use NO_WAIT to send the transaction and return the TxHash immediately
+ const txHash = await contract.methods.transfer(recipient, amount).send({
+   from: sender,
+   wait: NO_WAIT
+ });
```

#### Deployment changes

The old `.send().deployed()` method has been removed. Deployments now return the contract instance by default, or you can request the full receipt with `returnReceipt: true`:

```diff
- const contract = await MyContract.deploy(wallet, ...args).send().deployed();
- const { contract, instance } = await MyContract.deploy(wallet, ...args).send().wait();

+ const contract = await MyContract.deploy(wallet, ...args).send({ from: deployer });

+ const { contract, instance } = await MyContract.deploy(wallet, ...args).send({
+   from: deployer,
+   wait: { returnReceipt: true },
+ });
```

#### Breaking changes to the `Wallet` interface

`getTxReceipt()` has been removed from the interface.

The `sendTx` method signature has changed to support the new wait behavior:

```diff
- sendTx(payload: ExecutionPayload, options: SendOptions): Promise<TxReceipt>

+ sendTx<W extends InteractionWaitOptions = undefined>(
+   payload: ExecutionPayload,
+   options: SendOptions<W>
+ ): Promise<SendReturn<W>>
```

#### Manual waiting with `waitForTx`

When sending with `NO_WAIT`, you can wait for confirmation manually using the `waitForTx` utility:

```typescript
import { waitForTx } from "@aztec/aztec.js/node";

const txHash = await contract.methods.transfer(recipient, amount).send({
  from: sender,
  wait: NO_WAIT,
});

const receipt = await waitForTx(node, txHash, {
  timeout: 60000, // Optional: timeout in ms
  interval: 1000, // Optional: polling interval in ms
  dontThrowOnRevert: true, // Optional: return receipt even if tx reverted
});
```

The section is followed by the existing heading:

### [aztec-nr] Removal of intermediate modules

Many unnecessary modules have been removed from the API, making imports shorter. These are modules that contained just a single struct, where the module had the same name as the struct.
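The polling behavior described for `waitForTx` can be sketched as a small, dependency-free loop. This is an illustrative sketch only: `NodeLike`, `Receipt`, and `getReceipt` are hypothetical stand-ins, not the actual Aztec.js node interface.

```typescript
// Hypothetical receipt and node shapes, for illustration only.
type Receipt = { txHash: string; status: 'success' | 'reverted' };

interface NodeLike {
  getReceipt(txHash: string): Promise<Receipt | undefined>;
}

// Polls the node until a receipt appears, the timeout expires, or the
// transaction reverts (unless dontThrowOnRevert is set).
async function waitForTxSketch(
  node: NodeLike,
  txHash: string,
  opts: { timeout?: number; interval?: number; dontThrowOnRevert?: boolean } = {},
): Promise<Receipt> {
  const { timeout = 60000, interval = 1000, dontThrowOnRevert = false } = opts;
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    const receipt = await node.getReceipt(txHash);
    if (receipt) {
      if (receipt.status === 'reverted' && !dontThrowOnRevert) {
        throw new Error(`Tx ${txHash} reverted`);
      }
      return receipt;
    }
    await new Promise(res => setTimeout(res, interval));
  }
  throw new Error(`Timed out waiting for tx ${txHash}`);
}
```

The real utility additionally talks to an Aztec node over RPC; the loop structure (poll, check status, sleep, repeat until deadline) is the part sketched here.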

l1-contracts/scripts/run_rollup_upgrade.sh
Lines changed: 8 additions & 0 deletions

```diff
@@ -19,6 +19,14 @@ registry_address="${1:?registry_address is required}"
 echo "=== Deploying rollup upgrade ==="
 echo "Registry: $registry_address"
 
+# Force rebuild with production profile for mainnet/sepolia to ensure correct BlobLib
+# This covers edge cases where cached code from non-production builds may remain
+if [ "${L1_CHAIN_ID:-}" = "1" ] || [ "${L1_CHAIN_ID:-}" = "11155111" ]; then
+  echo "Mainnet/Sepolia detected - forcing production build..."
+  FOUNDRY_PROFILE=production forge build script/deploy/DeployRollupForUpgrade.s.sol --force
+  export FOUNDRY_PROFILE=production
+fi
+
 REGISTRY_ADDRESS="$registry_address" \
 REAL_VERIFIER="${REAL_VERIFIER:-true}" \
 forge script script/deploy/DeployRollupForUpgrade.s.sol:DeployRollupForUpgrade \
```
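The gate keys off `L1_CHAIN_ID` (1 for mainnet, 11155111 for Sepolia), and `"${L1_CHAIN_ID:-}"` expands to the empty string when unset, keeping the comparison safe under `set -u`. A runnable sketch of the same branching, with the `forge build` call stubbed out (function names here are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mirrors the chain-ID gate above: force the production profile on
# mainnet (1) or Sepolia (11155111), otherwise build with the default.
# Takes the chain ID as $1; "${1:-}" is empty when no argument is given.
select_profile() {
  local profile="default"
  if [ "${1:-}" = "1" ] || [ "${1:-}" = "11155111" ]; then
    profile="production"
  fi
  echo "build profile=$profile"   # stand-in for the forge build invocation
}

select_profile 31337      # prints: build profile=default
select_profile 1          # prints: build profile=production
select_profile 11155111   # prints: build profile=production
```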

spartan/scripts/deploy_network.sh
Lines changed: 4 additions & 2 deletions

```diff
@@ -466,9 +466,11 @@ AZTEC_INFRA_START=$(date +%s)
 DEPLOY_AZTEC_INFRA_DIR="${SCRIPT_DIR}/../terraform/deploy-aztec-infra"
 "${SCRIPT_DIR}/override_terraform_backend.sh" "${DEPLOY_AZTEC_INFRA_DIR}" "${CLUSTER}" "${BASE_STATE_PATH}/deploy-aztec-infra"
 
-# Gate NodePort based on cluster (true for kind, false for GKE)
+# Gate NodePort based on cluster
+# KIND doesn't need NodePort (local cluster, no external access needed)
+# GKE uses public IP instead of NodePort
 if [[ "${CLUSTER}" == "kind" ]]; then
-  P2P_NODEPORT_ENABLED=true
+  P2P_NODEPORT_ENABLED=false
   P2P_PUBLIC_IP=false
 else
   P2P_NODEPORT_ENABLED=false
```

spartan/scripts/deploy_rollup_upgrade.sh
Lines changed: 1 addition & 0 deletions

```diff
@@ -40,6 +40,7 @@ if [[ -z "${L1_CHAIN_ID:-}" ]]; then
   network_defaults="${repo_root}/spartan/environments/network-defaults.yml"
   L1_CHAIN_ID=$(yq "explode(.) | .networks.$NETWORK.L1_CHAIN_ID" "$network_defaults")
 fi
+export L1_CHAIN_ID
 
 log "Starting rollup upgrade deployment"
 log "L1 Chain ID: $L1_CHAIN_ID"
```
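The one-line `export` matters because `L1_CHAIN_ID` is presumably consumed by child processes this script launches (the `run_rollup_upgrade.sh` change in this commit reads `${L1_CHAIN_ID:-}`): a plain shell variable is visible only within the current shell, not to children, until it is exported. A minimal demonstration:

```shell
#!/usr/bin/env bash
set -euo pipefail
unset L1_CHAIN_ID

# A plain assignment is local to this shell; a child process does not see it.
L1_CHAIN_ID=11155111
bash -c 'echo "child sees: ${L1_CHAIN_ID:-unset}"'   # prints: child sees: unset

# After export, the variable is placed in the environment of children.
export L1_CHAIN_ID
bash -c 'echo "child sees: ${L1_CHAIN_ID:-unset}"'   # prints: child sees: 11155111
```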

spartan/scripts/test_kind.sh
Lines changed: 19 additions & 10 deletions

```diff
@@ -46,16 +46,28 @@ if [ "$install_metrics" = "true" ]; then
   ./bootstrap.sh metrics-kind
 fi
 
-# Capture logs
-mkdir -p scripts/logs
+# Capture pod logs via stern, streaming directly to cache_log
 stern_pid=""
+
+function start_stern {
+  if command -v stern &>/dev/null; then
+    echo "Starting stern log capture for namespace $namespace (streaming to cache_log)..."
+    # Pipe stern directly to cache_log for live streaming to Redis
+    stern ".*" -n "$namespace" --since=1s 2>&1 | cache_log "kind-$namespace-pods" &
+    stern_pid=$!
+    echo "Stern streaming started with PID $stern_pid"
+  else
+    echo "Warning: stern not installed, pod logs will not be captured"
+  fi
+}
+
 function cleanup {
   set +e
   if [ -n "$stern_pid" ]; then
+    echo "Stopping stern (PID $stern_pid)..."
     kill "$stern_pid" 2>/dev/null || true
+    wait "$stern_pid" 2>/dev/null || true
   fi
-  # Upload logs
-  (cat "scripts/logs/kind-$namespace.log" 2>/dev/null || true) | cache_log "kind test $test_file" || true
 
   if [ "$fresh_install" = "true" ]; then
     # Use kind_teardown.sh for comprehensive cleanup (namespace + terraform state)
@@ -64,12 +76,6 @@ function cleanup {
 }
 trap cleanup EXIT INT TERM
 
-# Start stern to capture logs
-if command -v stern &>/dev/null; then
-  stern ".*" -n "$namespace" > "scripts/logs/kind-$namespace.log" 2>&1 &
-  stern_pid=$!
-fi
-
 # Deploy the network
 echo "Deploying network to KIND namespace: $namespace"
@@ -81,6 +87,9 @@ fi
 # Run the deployment
 ./scripts/deploy_network.sh
 
+# Start stern AFTER namespace exists to capture pod logs
+start_stern
+
 # Wait for validator pods to be ready
 echo "Waiting for validator pods to be ready..."
 kubectl wait pod -l app.kubernetes.io/component=sequencer-node --for=condition=Ready -n "$namespace" --timeout=15m || true
```
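The restructured script streams a background pipeline (`stern | cache_log &`) and tears it down from a `trap`. One subtlety of this pattern: after `a | b &`, `$!` is the PID of the *last* pipeline stage (the sink), so `kill`/`wait` target the sink and the producer exits on SIGPIPE at its next write. A runnable sketch with `stern` and `cache_log` replaced by stubs (stub names are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

produce() { while true; do echo "pod line"; sleep 0.05; done; }  # stub for stern
sink() { cat > /dev/null; }                                      # stub for cache_log

# Start the background capture pipeline; $! is the sink's PID.
produce | sink &
pid=$!

# kill stops the sink; wait reaps it and suppresses job-termination noise.
stop_capture() {
  kill "$pid" 2>/dev/null || true
  wait "$pid" 2>/dev/null || true
}
trap stop_capture EXIT

sleep 0.2   # simulated test body
stop_capture
echo "capture stopped"
```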

yarn-project/archiver/src/store/kv_archiver_store.test.ts
Lines changed: 14 additions & 14 deletions

```diff
@@ -335,7 +335,7 @@ describe('KVArchiverDataStore', () => {
     await store.addCheckpoints(publishedCheckpoints);
     const lastCheckpoint = publishedCheckpoints[publishedCheckpoints.length - 1];
     const lastBlock = lastCheckpoint.checkpoint.blocks[0];
-    const blockHash = await lastBlock.header.hash();
+    const blockHash = (await lastBlock.header.hash()).toField();
     const archive = lastBlock.archive.root;
 
     // Verify block and header exist before removing
@@ -639,7 +639,7 @@ describe('KVArchiverDataStore', () => {
     // Check each block by its hash
     for (let i = 0; i < checkpoint.checkpoint.blocks.length; i++) {
       const block = checkpoint.checkpoint.blocks[i];
-      const blockHash = await block.header.hash();
+      const blockHash = (await block.header.hash()).toField();
       const retrievedBlock = await store.getCheckpointedBlockByHash(blockHash);
 
       expect(retrievedBlock).toBeDefined();
@@ -797,8 +797,8 @@ describe('KVArchiverDataStore', () => {
     await store.addProposedBlocks([block1, block2]);
 
     // getBlockByHash should work for uncheckpointed blocks
-    const hash1 = await block1.header.hash();
-    const hash2 = await block2.header.hash();
+    const hash1 = (await block1.header.hash()).toField();
+    const hash2 = (await block2.header.hash()).toField();
 
     const retrieved1 = await store.getBlockByHash(hash1);
     expect(retrieved1!.equals(block1)).toBe(true);
@@ -874,7 +874,7 @@ describe('KVArchiverDataStore', () => {
     });
     await store.addProposedBlocks([block1]);
 
-    const hash = await block1.header.hash();
+    const hash = (await block1.header.hash()).toField();
 
     // getCheckpointedBlockByHash should return undefined
     expect(await store.getCheckpointedBlockByHash(hash)).toBeUndefined();
@@ -1666,7 +1666,7 @@ describe('KVArchiverDataStore', () => {
   it('retrieves a block by its hash', async () => {
     const expectedCheckpoint = publishedCheckpoints[5];
     const expectedBlock = expectedCheckpoint.checkpoint.blocks[0];
-    const blockHash = await expectedBlock.header.hash();
+    const blockHash = (await expectedBlock.header.hash()).toField();
     const retrievedBlock = await store.getCheckpointedBlockByHash(blockHash);
 
     expect(retrievedBlock).toBeDefined();
@@ -1707,7 +1707,7 @@ describe('KVArchiverDataStore', () => {
 
   it('retrieves a block header by its hash', async () => {
     const expectedBlock = publishedCheckpoints[7].checkpoint.blocks[0];
-    const blockHash = await expectedBlock.header.hash();
+    const blockHash = (await expectedBlock.header.hash()).toField();
     const retrievedHeader = await store.getBlockHeaderByHash(blockHash);
 
     expect(retrievedHeader).toBeDefined();
@@ -1833,7 +1833,7 @@ describe('KVArchiverDataStore', () => {
     const expectedTx: IndexedTxEffect = {
       data,
       l2BlockNumber: block.number,
-      l2BlockHash: L2BlockHash.fromField(await block.header.hash()),
+      l2BlockHash: await block.header.hash(),
       txIndexInBlock,
     };
     const actualTx = await store.getTxEffect(data.txHash);
@@ -2732,7 +2732,7 @@ describe('KVArchiverDataStore', () => {
 
   it('returns block hash on public log ids', async () => {
     const targetBlock = publishedCheckpoints[0].checkpoint.blocks[0];
-    const expectedBlockHash = L2BlockHash.fromField(await targetBlock.header.hash());
+    const expectedBlockHash = await targetBlock.header.hash();
 
     const logs = (await store.getPublicLogs({ fromBlock: targetBlock.number, toBlock: targetBlock.number + 1 })).logs;
 
@@ -2790,7 +2790,7 @@ describe('KVArchiverDataStore', () => {
     const targetTxIndex = randomInt(getTxsPerBlock(targetBlock));
     const numLogsInTx = targetBlock.body.txEffects[targetTxIndex].publicLogs.length;
     const targetLogIndex = numLogsInTx > 0 ? randomInt(numLogsInTx) : 0;
-    const targetBlockHash = L2BlockHash.fromField(await targetBlock.header.hash());
+    const targetBlockHash = await targetBlock.header.hash();
 
     const afterLog = new LogId(
       BlockNumber(targetBlockIndex + INITIAL_L2_BLOCK_NUM),
@@ -2882,7 +2882,7 @@ describe('KVArchiverDataStore', () => {
     const targetTxIndex = randomInt(getTxsPerBlock(targetBlock));
     const numLogsInTx = targetBlock.body.txEffects[targetTxIndex].publicLogs.length;
     const targetLogIndex = numLogsInTx > 0 ? randomInt(numLogsInTx) : 0;
-    const targetBlockHash = L2BlockHash.fromField(await targetBlock.header.hash());
+    const targetBlockHash = await targetBlock.header.hash();
 
     const afterLog = new LogId(
       BlockNumber(targetBlockIndex + INITIAL_L2_BLOCK_NUM),
@@ -2935,7 +2935,7 @@ describe('KVArchiverDataStore', () => {
     expect(result.logs).toHaveLength(1);
 
     const [{ id, log }] = result.logs;
-    const expectedBlockHash = L2BlockHash.fromField(await targetBlock.header.hash());
+    const expectedBlockHash = await targetBlock.header.hash();
 
     expect(id.blockHash.equals(expectedBlockHash)).toBe(true);
     expect(id.blockNumber).toEqual(targetBlock.number);
@@ -3320,7 +3320,7 @@ describe('KVArchiverDataStore', () => {
     await store.addProposedBlocks([block1, block2]);
 
     // Verify block2 is retrievable by hash and archive before removal
-    const block2Hash = await block2.header.hash();
+    const block2Hash = (await block2.header.hash()).toField();
     const block2Archive = block2.archive.root;
 
     expect(await store.getBlockByHash(block2Hash)).toBeDefined();
@@ -3346,7 +3346,7 @@ describe('KVArchiverDataStore', () => {
     }
 
     // Verify block1's data is still intact
-    const block1Hash = await block1.header.hash();
+    const block1Hash = (await block1.header.hash()).toField();
     const block1Archive = block1.archive.root;
 
     expect(await store.getBlockByHash(block1Hash)).toBeDefined();
```
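These are mechanical call-site updates for a type change: `header.hash()` now appears to return the `L2BlockHash` wrapper directly instead of a raw field element, so `L2BlockHash.fromField(...)` wrappers disappear and `.toField()` appears where a raw field is still required. The wrapper-with-`toField()` pattern can be sketched with hypothetical stand-ins (`Fr` and `BlockHash` below are illustrative, not the actual `@aztec/stdlib` types):

```typescript
// Illustrative stand-in for a field element.
class Fr {
  constructor(readonly value: bigint) {}
  equals(other: Fr): boolean {
    return this.value === other.value;
  }
}

// Illustrative stand-in for a typed block-hash wrapper: it carries the same
// data as Fr but gives hash-typed APIs a distinct, non-interchangeable type.
class BlockHash {
  private constructor(private readonly inner: Fr) {}
  static fromField(f: Fr): BlockHash {
    return new BlockHash(f);
  }
  toField(): Fr {
    return this.inner;
  }
  equals(other: BlockHash): boolean {
    return this.inner.equals(other.inner);
  }
}

// Before: hash() returned Fr and callers wrapped it with BlockHash.fromField.
// After: hash() returns BlockHash and callers unwrap with .toField() only
// where a raw field element is still expected (e.g. archive-tree lookups).
const hash = BlockHash.fromField(new Fr(42n));
const asField: Fr = hash.toField();
```

Round-tripping through `fromField`/`toField` is lossless, so the migration changes only the static types at each call site, not the values compared.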

yarn-project/aztec-node/src/aztec-node/server.ts
Lines changed: 1 addition & 5 deletions

```diff
@@ -1411,11 +1411,7 @@ export class AztecNodeService implements AztecNode, AztecNodeAdmin, Traceable {
 
   #getInitialHeaderHash(): Promise<L2BlockHash> {
     if (!this.initialHeaderHashPromise) {
-      this.initialHeaderHashPromise = this.worldStateSynchronizer
-        .getCommitted()
-        .getInitialHeader()
-        .hash()
-        .then(hash => L2BlockHash.fromField(hash));
+      this.initialHeaderHashPromise = this.worldStateSynchronizer.getCommitted().getInitialHeader().hash();
     }
     return this.initialHeaderHashPromise;
   }
```
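The surviving one-liner relies on promise memoization: the private field caches the in-flight promise, so the header hash is computed at most once and concurrent callers share the same result. A generic sketch of the pattern (class and method names here are illustrative, not the actual `AztecNodeService` code):

```typescript
// Promise-memoization sketch: the first caller triggers the computation,
// every later (or concurrent) caller gets the same cached promise.
class HeaderHashCache {
  private cached?: Promise<string>;
  computeCount = 0; // exposed only so the caching behavior is observable

  getHash(): Promise<string> {
    if (!this.cached) {
      this.cached = this.compute();
    }
    return this.cached;
  }

  private async compute(): Promise<string> {
    this.computeCount++;
    return `hash-${this.computeCount}`; // stand-in for header.hash()
  }
}
```

Caching the promise (rather than the resolved value) means even callers that arrive before the first computation finishes share the same work.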

yarn-project/end-to-end/src/e2e_simple.test.ts
Lines changed: 3 additions & 4 deletions

```diff
@@ -7,7 +7,6 @@ import type { AztecNode } from '@aztec/aztec.js/node';
 import type { Wallet } from '@aztec/aztec.js/wallet';
 import { BlockNumber } from '@aztec/foundation/branded-types';
 import { StatefulTestContractArtifact } from '@aztec/noir-test-contracts.js/StatefulTest';
-import { L2BlockHash } from '@aztec/stdlib/block';
 
 import { jest } from '@jest/globals';
 import 'jest-extended';
@@ -57,15 +56,15 @@ describe('e2e_simple', () => {
     const initialHeader = await aztecNode.getBlockHeader(BlockNumber.ZERO);
     expect(initialHeader).toBeDefined();
     const initialHeaderHash = await initialHeader!.hash();
-    const initialBlockByHash = await aztecNode.getBlock(L2BlockHash.fromField(initialHeaderHash));
+    const initialBlockByHash = await aztecNode.getBlock(initialHeaderHash);
     expect(initialBlockByHash).toBeDefined();
     const initialBlockHash = await initialBlockByHash!.hash();
-    expect(initialBlockHash.equals(initialHeaderHash)).toBeTrue();
+    expect(initialBlockHash.equals(initialHeaderHash.toField())).toBeTrue();
     expect(initialBlockByHash?.body.txEffects.length).toBe(0);
     const initialBlockByNumber = await aztecNode.getBlock(BlockNumber.ZERO);
     expect(initialBlockByNumber).toBeDefined();
     const initialBlockByNumberHash = await initialBlockByNumber!.hash();
-    expect(initialBlockByNumberHash.equals(initialHeaderHash)).toBeTrue();
+    expect(initialBlockByNumberHash.equals(initialHeaderHash.toField())).toBeTrue();
     expect(initialBlockByNumber?.body.txEffects.length).toBe(0);
   });
```

yarn-project/p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Lines changed: 2 additions & 1 deletion

```diff
@@ -322,9 +322,10 @@ describe('KV TX pool', () => {
     // modify tx1 to return no archive indices
     tx1.data.constants.anchorBlockHeader.globalVariables.blockNumber = BlockNumber(1);
     const tx1HeaderHash = await tx1.data.constants.anchorBlockHeader.hash();
+    const tx1HeaderHashFr = tx1HeaderHash.toField();
     db.findLeafIndices.mockImplementation((tree, leaves) => {
       if (tree === MerkleTreeId.ARCHIVE) {
-        return Promise.resolve((leaves as Fr[]).map(l => (l.equals(tx1HeaderHash) ? undefined : 1n)));
+        return Promise.resolve((leaves as Fr[]).map(l => (l.equals(tx1HeaderHashFr) ? undefined : 1n)));
       }
       return Promise.resolve([]);
     });
```
