mptensor  v0.3.0
Parallel Library for Tensor Network Methods
mptensor::mpi_wrapper Namespace Reference

Wrappers of the MPI library. More...

Functions

template<typename C >
C allreduce_sum (C val, const MPI_Comm &comm)
 Calculate a summation over an MPI communicator. More...
 
template<typename C >
std::vector< C > allreduce_vec (const std::vector< C > &vec, const MPI_Comm &comm)
 Calculate an element-wise summation of a vector over an MPI communicator. More...
 
template<typename C >
void send_recv_vector (const std::vector< C > &send_vec, int dest, int sendtag, std::vector< C > &recv_vec, int source, int recvtag, const MPI_Comm &comm, MPI_Status &status)
 Wrapper of MPI_Sendrecv. More...
 
template<typename C >
void alltoallv (const C *sendbuf, const int *sendcounts, const int *sdispls, C *recvbuf, const int *recvcounts, const int *rdispls, const MPI_Comm &comm)
 Wrapper of MPI_Alltoallv. More...
 
template<typename C >
void bcast (C *buffer, int count, int root, const MPI_Comm &comm)
 Wrapper of MPI_Bcast. More...
 

Detailed Description

Wrappers of the MPI library.

Function Documentation

◆ allreduce_sum()

template<typename C >
C mptensor::mpi_wrapper::allreduce_sum ( C  val,
const MPI_Comm &  comm 
)

Calculate a summation over an MPI communicator.

Parameters
[in] val Value to be summed.
[in] comm MPI communicator.
Returns
The summation of val over all ranks in comm.
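A minimal usage sketch. The include path for the wrapper header and the two-rank launch are assumptions, not confirmed by this page:

```cpp
#include <mpi.h>

#include <iostream>

// Assumed include path; adjust to your mptensor installation.
#include <mptensor/mpi_wrapper.hpp>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank contributes one value; every rank receives the global sum.
  double local = static_cast<double>(rank + 1);
  double total = mptensor::mpi_wrapper::allreduce_sum(local, MPI_COMM_WORLD);

  // With 2 ranks, the local values are 1 and 2, so total is 3 on both ranks.
  std::cout << "rank " << rank << ": total = " << total << std::endl;

  MPI_Finalize();
  return 0;
}
```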

◆ allreduce_vec()

template<typename C >
std::vector<C> mptensor::mpi_wrapper::allreduce_vec ( const std::vector< C > &  vec,
const MPI_Comm &  comm 
)

Calculate an element-wise summation of a vector over an MPI communicator.

Parameters
[in] vec Vector to be summed.
[in] comm MPI communicator.
Returns
The resulting vector, whose i-th element is the sum of vec[i] over all ranks.
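A usage sketch under the same assumptions as above (header path not confirmed by this page); note that all ranks should pass vectors of equal length:

```cpp
#include <mpi.h>

#include <vector>

#include <mptensor/mpi_wrapper.hpp>  // assumed include path

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Every rank passes a vector of the same length; the result on every
  // rank is the element-wise sum across all ranks.
  std::vector<double> local = {1.0 * rank, 2.0 * rank};
  std::vector<double> total =
      mptensor::mpi_wrapper::allreduce_vec(local, MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}
```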

◆ alltoallv()

template<typename C >
void mptensor::mpi_wrapper::alltoallv ( const C *  sendbuf,
const int *  sendcounts,
const int *  sdispls,
C *  recvbuf,
const int *  recvcounts,
const int *  rdispls,
const MPI_Comm &  comm 
)

Wrapper of MPI_Alltoallv.

Parameters
[in] sendbuf Starting address of send buffer.
[in] sendcounts Integer array, where entry i specifies the number of elements to send to rank i.
[in] sdispls Integer array, where entry i specifies the displacement (offset from sendbuf, in units of sendtype) from which to send data to rank i.
[out] recvbuf Address of receive buffer.
[in] recvcounts Integer array, where entry j specifies the number of elements to receive from rank j.
[in] rdispls Integer array, where entry j specifies the displacement (offset from recvbuf, in units of recvtype) to which data from rank j should be written.
[in] comm Communicator over which data is to be exchanged.
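A sketch of the simplest counts/displacements layout, in which every rank sends exactly one element to every other rank. The include path is an assumption:

```cpp
#include <mpi.h>

#include <numeric>
#include <vector>

#include <mptensor/mpi_wrapper.hpp>  // assumed include path

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Uniform all-to-all: one element per rank, so all counts are 1 and
  // the displacements are 0, 1, ..., size-1.
  std::vector<double> sendbuf(size, static_cast<double>(rank));
  std::vector<double> recvbuf(size);
  std::vector<int> counts(size, 1), displs(size);
  std::iota(displs.begin(), displs.end(), 0);

  mptensor::mpi_wrapper::alltoallv(sendbuf.data(), counts.data(), displs.data(),
                                   recvbuf.data(), counts.data(), displs.data(),
                                   MPI_COMM_WORLD);

  // After the call, recvbuf[j] holds the value sent by rank j.
  MPI_Finalize();
  return 0;
}
```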

◆ bcast()

template<typename C >
void mptensor::mpi_wrapper::bcast ( C *  buffer,
int  count,
int  root,
const MPI_Comm &  comm 
)

Wrapper of MPI_Bcast.

Parameters
[in,out] buffer Starting address of buffer.
[in] count Number of entries in buffer.
[in] root Rank of broadcast root.
[in] comm Communicator.
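A usage sketch (include path assumed, as above). As with MPI_Bcast, only the root's buffer contents matter on entry; on return every rank holds the root's data:

```cpp
#include <mpi.h>

#include <vector>

#include <mptensor/mpi_wrapper.hpp>  // assumed include path

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int root = 0;
  std::vector<double> buf(3);
  if (rank == root) buf = {1.0, 2.0, 3.0};  // only the root fills the buffer

  // After the call, buf holds {1.0, 2.0, 3.0} on every rank.
  mptensor::mpi_wrapper::bcast(buf.data(), static_cast<int>(buf.size()), root,
                               MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}
```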

◆ send_recv_vector()

template<typename C >
void mptensor::mpi_wrapper::send_recv_vector ( const std::vector< C > &  send_vec,
int  dest,
int  sendtag,
std::vector< C > &  recv_vec,
int  source,
int  recvtag,
const MPI_Comm &  comm,
MPI_Status &  status 
)

Wrapper of MPI_Sendrecv.

Parameters
[in] send_vec Vector to be sent.
[in] dest Rank of destination.
[in] sendtag Send tag.
[out] recv_vec Vector to be received.
[in] source Rank of source.
[in] recvtag Receive tag.
[in] comm MPI communicator.
[out] status Status object.
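A ring-exchange sketch: each rank sends to its successor and receives from its predecessor. The include path is an assumption, and whether recv_vec is resized internally is not stated on this page, so the sketch pre-sizes it:

```cpp
#include <mpi.h>

#include <vector>

#include <mptensor/mpi_wrapper.hpp>  // assumed include path

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Ring exchange: send to the next rank, receive from the previous one.
  // The sender's tag must match the receiver's tag (0 everywhere here).
  const int dest = (rank + 1) % size;
  const int source = (rank + size - 1) % size;

  std::vector<double> send_vec = {static_cast<double>(rank)};
  // Pre-size the receive vector to the expected message length, in case
  // the wrapper does not resize it internally.
  std::vector<double> recv_vec(send_vec.size());
  MPI_Status status;

  mptensor::mpi_wrapper::send_recv_vector(send_vec, dest, 0, recv_vec, source,
                                          0, MPI_COMM_WORLD, status);

  // After the call, recv_vec holds the rank number of source.
  MPI_Finalize();
  return 0;
}
```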