Issue 1711 - Implement adjacency map computation for MPI mesher #1756

Draft
Rohit-Kakodkar wants to merge 2 commits into devel from issue-1711-update-3d-mesher

Conversation

@Rohit-Kakodkar
Collaborator

Description

Please describe the changes/features in this pull request.

  • Add routines to communicate data
  • Add routines to compute MPI adjacencies
  • Update the save databases functions to write MPI adjacencies

Issue Number

If there is an issue created for these changes, link it here

Checklist

Please make sure to check developer documentation on specfem docs.

  • I ran the code through pre-commit to check style
  • The documentation builds without warnings/errors
  • I have added labels to the PR (see right hand side of the PR page)
  • My code passes all the integration tests
  • I have added sufficient unittests to test my changes
  • I have added/updated documentation for the changes I am proposing
  • I have updated CMakeLists to ensure my code builds
  • My code builds across all platforms

Copilot AI review requested due to automatic review settings, March 25, 2026 18:29
@Rohit-Kakodkar Rohit-Kakodkar marked this pull request as draft March 25, 2026 18:33
Contributor

Copilot AI left a comment

Pull request overview

Implements cross-rank adjacency discovery for the MPI meshfem3D mesher by exchanging interface element metadata/coordinates and writing resulting MPI adjacencies into the mesh database output.

Changes:

  • Add new MPI buffer exchange + coordinate-based matching to compute cross-MPI element adjacencies.
  • Invoke MPI exchange + adjacency computation during mesh creation.
  • Extend database writer to emit an MPI adjacency list after the local adjacency list.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 6 comments.

File Description
fortran/meshfem3d/meshfem3D/assemble_MPI.f90 New module + routines to exchange interface data and compute mpi_adjacency via coordinate matching.
fortran/meshfem3d/meshfem3D/create_meshfem_mesh.f90 Calls the new MPI exchange and adjacency computation during mesh creation.
fortran/meshfem3d/meshfem3D/save_databases.F90 Writes MPI adjacency list into the database output (in addition to local adjacency list).
fortran/meshfem3d/meshfem3D/adjacency_graph.f90 Removes NPROC=1 restriction so local adjacency can still be computed in MPI runs.



!! set up ibool_corners for each interface type (W, E, S, N, NW, NE, SE, SW)

call assemble_ibool_corners(ibool_corners_W, ibool_corners_E, ibool_corners_S, &

Copilot AI Mar 25, 2026

assemble_MPI calls assemble_ibool_corners(...), but this subroutine is not defined anywhere in this file; the only matching routine present is set_ibool_corners(...). As written, this will not compile. Either rename the call to set_ibool_corners or add/rename the missing subroutine consistently.

Suggested change
call assemble_ibool_corners(ibool_corners_W, ibool_corners_E, ibool_corners_S, &
call set_ibool_corners(ibool_corners_W, ibool_corners_E, ibool_corners_S, &

Comment on lines +1327 to +1353
allocate(send_coords(nsend, 4, NDIM))
allocate(recv_coords(nrecv, 4, NDIM))
allocate(send_iface(nsend, 2))
allocate(recv_iface(nrecv, 2))

call get_direction_buffers(dir, nsend, nrecv, &
send_coords, recv_coords, send_iface, recv_iface)

do isend = 1, nsend
found = .false.
do irecv = 1, nrecv
if (corners_match(send_coords(isend,:,:), &
recv_coords(irecv,:,:), ncorners)) then
count = count + 1
found = .true.
exit
endif
end do
if (.not. found) then
write(*,*) 'Error: No matching element found for local element ', &
send_iface(isend, 1), ' in direction ', dir
stop 'MPI adjacency error: unmatched interface element'
endif
end do

deallocate(send_coords, recv_coords, send_iface, recv_iface)


Copilot AI Mar 25, 2026

This introduces O(nsend * nrecv) brute-force matching (nested loops over send/recv elements) for every active interface. For large partitions this can dominate meshing time; consider building a lookup (e.g., hash key from quantized/sorted corner coordinates) from the received buffer to achieve ~O(nsend + nrecv) matching and detect duplicates explicitly.
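The quantized-key lookup described in this comment can be sketched outside Fortran. The following minimal Python illustration is hypothetical (the names, the tolerance value, and the corner-sorting scheme are not taken from the PR); it only demonstrates the ~O(nsend + nrecv) matching idea:

```python
# Sketch of O(nsend + nrecv) interface matching via a dictionary keyed on
# quantized corner coordinates. All names and the tolerance are assumptions
# for illustration, not code from the PR.

TOL = 1e-6  # assumed coordinate-matching tolerance

def quantize(corners, tol=TOL):
    """Build a hashable key from an element's corner coordinates.

    Corners are sorted so the key is independent of corner ordering."""
    return tuple(sorted(tuple(round(c / tol) for c in corner)
                        for corner in corners))

def match_interfaces(send_coords, recv_coords):
    """Return, for each send element, the index of its matching recv element."""
    lookup = {quantize(c): i for i, c in enumerate(recv_coords)}
    matches = []
    for isend, corners in enumerate(send_coords):
        irecv = lookup.get(quantize(corners))
        if irecv is None:
            raise RuntimeError(f"unmatched interface element {isend}")
        matches.append(irecv)
    return matches

# Two coincident quadrilateral faces whose corners are listed in different orders:
face = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
shuffled = [face[2], face[0], face[3], face[1]]
print(match_interfaces([face], [shuffled]))  # [0]
```

One caveat applies to any rounding-based key (including the int(coord * HASH_SCALE) hash in the suggested change below): coordinates that land near a quantization boundary on different ranks can round to different buckets, so a production version would need a boundary-tolerant key or a fallback comparison.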

Suggested change
allocate(send_coords(nsend, 4, NDIM))
allocate(recv_coords(nrecv, 4, NDIM))
allocate(send_iface(nsend, 2))
allocate(recv_iface(nrecv, 2))
call get_direction_buffers(dir, nsend, nrecv, &
send_coords, recv_coords, send_iface, recv_iface)
do isend = 1, nsend
found = .false.
do irecv = 1, nrecv
if (corners_match(send_coords(isend,:,:), &
recv_coords(irecv,:,:), ncorners)) then
count = count + 1
found = .true.
exit
endif
end do
if (.not. found) then
write(*,*) 'Error: No matching element found for local element ', &
send_iface(isend, 1), ' in direction ', dir
stop 'MPI adjacency error: unmatched interface element'
endif
end do
deallocate(send_coords, recv_coords, send_iface, recv_iface)
! Arrays for hash-based lookup on received elements
integer, allocatable :: recv_hash(:), recv_order(:)
logical, allocatable :: recv_used(:)
integer :: c, d
integer :: send_hash, h
integer :: idx, first_idx, nmatch_candidates
double precision, parameter :: HASH_SCALE = 1.0d6
allocate(send_coords(nsend, 4, NDIM))
allocate(recv_coords(nrecv, 4, NDIM))
allocate(send_iface(nsend, 2))
allocate(recv_iface(nrecv, 2))
call get_direction_buffers(dir, nsend, nrecv, &
send_coords, recv_coords, send_iface, recv_iface)
! Allocate and build hash/index arrays for received elements
if (nrecv > 0) then
allocate(recv_hash(nrecv))
allocate(recv_order(nrecv))
allocate(recv_used(nrecv))
do irecv = 1, nrecv
recv_order(irecv) = irecv
recv_used(irecv) = .false.
h = 0
do c = 1, ncorners
do d = 1, NDIM
h = h + int(recv_coords(irecv, c, d) * HASH_SCALE)
end do
end do
recv_hash(irecv) = h
end do
call quicksort_indices(recv_hash, recv_order, 1, nrecv)
else
! No received elements: any send elements will fail to find matches
allocate(recv_hash(0))
allocate(recv_order(0))
allocate(recv_used(0))
end if
! For each local send element, find its match using hash lookup
do isend = 1, nsend
found = .false.
! Compute hash for this send element
h = 0
do c = 1, ncorners
do d = 1, NDIM
h = h + int(send_coords(isend, c, d) * HASH_SCALE)
end do
end do
send_hash = h
! Binary search first occurrence of send_hash in sorted recv_hash
if (nrecv > 0) then
first_idx = binary_search_first(recv_hash, nrecv, send_hash)
else
first_idx = 0
end if
if (first_idx > 0) then
! Scan forward over all candidates with matching hash
idx = first_idx
nmatch_candidates = 0
do while (idx <= nrecv .and. recv_hash(idx) == send_hash)
if (.not. recv_used(idx)) then
irecv = recv_order(idx)
if (corners_match(send_coords(isend,:,:), &
& recv_coords(irecv,:,:), ncorners)) then
count = count + 1
found = .true.
recv_used(idx) = .true.
exit
end if
end if
idx = idx + 1
end do
end if
if (.not. found) then
write(*,*) 'Error: No matching element found for local element ', &
send_iface(isend, 1), ' in direction ', dir
stop 'MPI adjacency error: unmatched interface element'
end if
end do
deallocate(send_coords, recv_coords, send_iface, recv_iface)
if (allocated(recv_hash)) deallocate(recv_hash)
if (allocated(recv_order)) deallocate(recv_order)
if (allocated(recv_used)) deallocate(recv_used)
contains
recursive subroutine quicksort_indices(keys, idx, left, right)
integer, intent(inout) :: keys(:)
integer, intent(inout) :: idx(:)
integer, intent(in) :: left, right
integer :: i, j, pivot, tmpk, tmpi
if (left >= right) return
i = left
j = right
pivot = keys((left + right) / 2)
do
do while (keys(i) < pivot)
i = i + 1
end do
do while (keys(j) > pivot)
j = j - 1
end do
if (i <= j) then
tmpk = keys(i)
keys(i) = keys(j)
keys(j) = tmpk
tmpi = idx(i)
idx(i) = idx(j)
idx(j) = tmpi
i = i + 1
j = j - 1
end if
if (i > j) exit
end do
if (left < j) call quicksort_indices(keys, idx, left, j)
if (i < right) call quicksort_indices(keys, idx, i, right)
end subroutine quicksort_indices
integer function binary_search_first(keys, n, value) result(pos)
integer, intent(in) :: keys(:)
integer, intent(in) :: n, value
integer :: lo, hi, mid
pos = 0
if (n <= 0) return
lo = 1
hi = n
do while (lo <= hi)
mid = (lo + hi) / 2
if (keys(mid) >= value) then
hi = mid - 1
else
lo = mid + 1
end if
end do
if (lo >= 1 .and. lo <= n .and. keys(lo) == value) then
pos = lo
else
pos = 0
end if
end function binary_search_first

if (num_mpi_adjacencies == 0) return

allocate(mpi_adjacency(num_mpi_adjacencies, 5), stat=ier)
if (ier /= 0) stop 'Error allocating mpi_adjacency'

Copilot AI Mar 25, 2026

mpi_adjacency allocation failure uses a bare stop, while the rest of the MPI-related error paths use exit_MPI with a descriptive message. For consistent error handling (and to ensure clean MPI abort behavior), prefer using the same exit_MPI(myrank, ...) pattern here too.

Suggested change
if (ier /= 0) stop 'Error allocating mpi_adjacency'
if (ier /= 0) call exit_MPI(myrank, 'Error allocating mpi_adjacency')

Comment on lines +138 to +141
! Exchange MPI interface buffers and compute cross-MPI adjacencies
call assemble_MPI()
call compute_mpi_adjacency()


Copilot AI Mar 25, 2026

These new calls require the new assemble_MPI / compute_mpi_adjacency routines to be compiled and linked into the meshfem3D target. Currently fortran/meshfem3d/meshfem3D/CMakeLists.txt does not list assemble_MPI.f90, so builds will fail with unresolved symbols unless the build files are updated accordingly.
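The kind of build-file change this comment asks for can be sketched as follows (the target name and the exact structure of fortran/meshfem3d/meshfem3D/CMakeLists.txt are assumptions; match whatever source-list convention that file already uses):

```cmake
# Hypothetical fix: register the new source file with the meshfem3D target
# so assemble_MPI / compute_mpi_adjacency are compiled and linked.
target_sources(meshfem3D PRIVATE assemble_MPI.f90)
```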

enddo
enddo

! Total adjacency count = local + MPI

Copilot AI Mar 25, 2026

The comment says "Total adjacency count = local + MPI", but total_nadj_element is computed only from adjacency_matrix (local partition) and does not include num_mpi_adjacencies. Either update the comment/variable name to reflect that this is the local adjacency count, or write the combined count if the file format expects total adjacency entries.

Suggested change
! Total adjacency count = local + MPI
! Total local adjacency count (MPI adjacencies written separately below)


! Write MPI adjacencies
! Format: myindex, neighbor_iproc, neighbor_index, my_connection_id, neighbor_connection_id
write(IIN_database) num_mpi_adjacencies

Copilot AI Mar 25, 2026

num_mpi_adjacencies is written twice in a row (once at line 637 and again at line 642) before the MPI adjacency entries. This will shift the file stream and likely break any reader expecting a single count header for the MPI adjacency list. Remove the duplicate write and confirm the intended database format.

Suggested change
write(IIN_database) num_mpi_adjacencies
