
DTS205TC High Performance Computing - Lab 2: Networks and Lab 3: Monte Carlo to Calculate π


DTS205TC High Performance Computing Lab 2

Overview

Chat is a common web application. We simulate two chatters, A and B, with program(s). A reads the user's input and sends it to B; B reverses the message, converts it to uppercase, and prints it to the screen. The basic process is as follows:

message = ''

def A():
    global message
    message = input()

def B():
    global message
    message = message[::-1].upper()

if __name__ == '__main__':
    A()
    B()
    print(message)

Here, a global variable is used to pass the message between A and B. You need to modify the program so that it still exhibits the same visible behavior to the user, while its internal implementation may differ:

1) Place A and B in different sub-processes, and use the SharedMemory of Python's multiprocessing module to transmit the message. (5 marks)
2) Place A and B in different sub-processes, and use the Pipe of Python's multiprocessing module to transmit the message. (5 marks; a sketch is given after the note below)
3) Place A and B in one process on the client side, and implement a UDP server in another program to transmit the message. (5 marks; a sketch is given after the note below)
4) Place A and B in one process on the client side, and implement a TCP server in another program to transmit the message. The TCP server also has two sub-processes, each of which listens on a unique port and interacts with A or B. The TCP server uses a Pipe to forward the message between its two sub-processes. (5 marks)

Note: This exercise is designed to deepen your understanding of networks. Do not call off-the-shelf libraries.
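For reference, minimal sketches of variants 2) and 3) follow. They are illustrative, not the only valid designs; script names, addresses, ports, and buffer sizes are assumptions.

A sketch of variant 2), using multiprocessing.Pipe. It assumes the Unix 'fork' start method: multiprocessing gives child processes /dev/null as stdin, so the parent duplicates its own stdin descriptor and A reopens it before calling input().

import os
import sys
from multiprocessing import Process, Pipe

def A(conn, stdin_fd):
    # children get /dev/null as stdin; reopen the parent's stdin
    sys.stdin = os.fdopen(stdin_fd)
    conn.send(input())
    conn.close()

def B(conn):
    # receive, transform, and print (same visible behavior as before)
    msg = conn.recv()
    print(msg[::-1].upper())
    conn.close()

if __name__ == '__main__':
    a_end, b_end = Pipe()
    stdin_fd = os.dup(sys.stdin.fileno())  # survives into the forked child
    pa = Process(target=A, args=(a_end, stdin_fd))
    pb = Process(target=B, args=(b_end,))
    pa.start(); pb.start()
    pa.join(); pb.join()

Variant 1) can follow the same structure, with a SharedMemory block plus a synchronization primitive (e.g. an Event) in place of the Pipe.

A sketch of variant 3) treats the UDP server as an echo-style relay: A sends the raw message to the server, the server sends it back to the sender's address, and B receives, transforms, and prints it. The server (a separate program, here called udp_server.py):

import socket

HOST, PORT = '127.0.0.1', 50007  # illustrative address

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind((HOST, PORT))
while True:
    # relay each datagram straight back to its sender
    data, addr = srv.recvfrom(4096)
    srv.sendto(data, addr)

The client, with A and B in one process:

import socket

SERVER = ('127.0.0.1', 50007)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def A():
    # read the user's input and send it through the server
    sock.sendto(input().encode('utf-8'), SERVER)

def B():
    # receive the relayed message, transform, and print
    data, _ = sock.recvfrom(4096)
    print(data.decode('utf-8')[::-1].upper())

if __name__ == '__main__':
    A()
    B()

Variant 4) combines both ideas: a TCP server with two sub-processes, one port per sub-process, joined internally by a Pipe.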






DTS205TC High Performance Computing Lab 3

Overview

Using the Monte Carlo method to calculate π is a classic example of parallel computing: points are sampled uniformly in the square [-1, 1] × [-1, 1], and since the unit circle covers π/4 of the square's area, the fraction of samples that hit the circle approximates π/4. A serial version of the program is as follows:

import numpy as np

# try to hit the unit circle
def hit_circle(num):
    # sampling in square
    x = np.random.uniform(low=-1, high=1, size=(num,))
    y = np.random.uniform(low=-1, high=1, size=(num,))
    # hit or not
    h = (np.square(x) + np.square(y)) <= 1
    return h

# calculate pi with hit record
def calc_pi(sam):
    return 4 * np.sum(sam) / sam.shape[0]

M = 10 ** 4  # samples per batch
T = 4        # number of batches

# do sampling in batch
hits = np.array([])
for i in range(T):
    hits = np.hstack((hits, hit_circle(M)))

print(f'pi={calc_pi(hits)}')

It is worth noting that the sampling in the above code is done in batches: it is divided into T steps, with M samples taken in each step. This batching can serve as the basis for our parallelization.

1) Based on the master-slave method, use mpi4py to implement an MPI version of Monte Carlo to calculate π. (5 marks)

TIPS: The scaffolding code is as follows (the functions calc_pi and hit_circle can be found in the serial version):

import numpy as np
from mpi4py import MPI

# environment info
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nproc = comm.Get_size()

# number of tasks
T = nproc - 1
# total num. of sampling
M = 10 ** 2

if rank == 0:  # master
    assert nproc > 1
    # ========================================================
    # ==== add your own code here ============================
    # ========================================================
else:  # slave
    # ========================================================
    # ==== add your own code here ============================
    # ========================================================
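One possible way to complete the scaffold is sketched below; it is not the official solution. Each slave draws one batch of M samples and sends its hit array to the master, which concatenates the batches and prints the estimate. hit_circle and calc_pi are repeated from the serial version so the sketch is self-contained.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nproc = comm.Get_size()

def hit_circle(num):  # as in the serial version
    x = np.random.uniform(low=-1, high=1, size=(num,))
    y = np.random.uniform(low=-1, high=1, size=(num,))
    return (np.square(x) + np.square(y)) <= 1

def calc_pi(sam):  # as in the serial version
    return 4 * np.sum(sam) / sam.shape[0]

T = nproc - 1  # one task per slave
M = 10 ** 2    # samples per task

if rank == 0:  # master: collect one batch from every slave
    assert nproc > 1
    hits = np.array([])
    for src in range(1, nproc):
        hits = np.hstack((hits, comm.recv(source=src)))
    print(f'pi={calc_pi(hits)}')
else:  # slave: draw one batch and send the hit record to the master
    comm.send(hit_circle(M), dest=0)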

The result of a run is:
[output screenshot]

2) Change the number of tasks, test the running time of program 1) five times for each setting, and then fill in the table below. The total number of samples across all tasks is recommended (not compulsory) to be set to N = 10^7. In this way, when there are K tasks, each task needs to perform M = N/K samplings.

Num. of Processes        1    …
Running Time (s)
Max number of hardware threads on your computer: ____

Analyze why the average running time decreases, increases, or remains unchanged as the number of processes increases. (5 marks)

Note:

The total number of samples can be adjusted to make the phenomenon clear, depending on your machine configuration.

If you use Linux, you can use the time command for timing; if you use a different OS, please find a similar command yourself.
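For example, on Linux the whole run can be timed like this (the script name is illustrative; -n counts the master plus the slaves, so -n 5 runs 4 tasks):

time mpiexec -n 5 python lab3_master_slave.py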

3) Based on the work-pool method, use mpi4py to implement an MPI version of Monte Carlo to calculate π. (5 marks)

TIPS: The scaffolding code is as follows (the functions calc_pi and hit_circle can be found in the serial version). It takes the total number of tasks as an input parameter.

import sys
import numpy as np
from mpi4py import MPI

# environment info
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nproc = comm.Get_size()

# allocate window for completed tasks
datatype = MPI.INT
itemsize = datatype.Get_size()  # get size of datatype

num_tasks_done = np.array(0, dtype='i')  # buffer
N = num_tasks_done.size
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(win_size, comm=comm)  # allocate window

# number of tasks
T = int(sys.argv[1])
M = 10 ** 2  # size of sampling

if rank == 0:  # manager
    assert nproc > 1
    # ========================================================
    # ==== add your own code here ============================
    # ========================================================
else:  # worker
    # ========================================================
    # ==== add your own code here ============================
    # ========================================================

win.Free()
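One possible way to complete the scaffold is sketched below, using the window the scaffolding already allocates: the shared integer on rank 0 holds the index of the next unclaimed task, and each worker claims tasks with an atomic fetch-and-add (Get_accumulate) until all T tasks are taken. The task-claiming scheme and the final reduce are assumptions about the design; the scaffold does not prescribe either.

import sys
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nproc = comm.Get_size()

def hit_circle(num):  # as in the serial version
    x = np.random.uniform(low=-1, high=1, size=(num,))
    y = np.random.uniform(low=-1, high=1, size=(num,))
    return (np.square(x) + np.square(y)) <= 1

T = int(sys.argv[1])  # total number of tasks
M = 10 ** 2           # samples per task

# one shared integer on rank 0: the index of the next unclaimed task
itemsize = MPI.INT.Get_size()
win = MPI.Win.Allocate(itemsize if rank == 0 else 0, comm=comm)
if rank == 0:
    np.frombuffer(win.tomemory(), dtype='i')[0] = 0
comm.Barrier()  # make sure the counter is initialized before use

local_hits, local_draws = 0, 0
if rank == 0:  # manager: just wait for the workers' results
    assert nproc > 1
else:          # worker: claim tasks until none remain
    one = np.ones(1, dtype='i')
    old = np.zeros(1, dtype='i')
    while True:
        win.Lock(0)  # atomic fetch-and-add claims the next task index
        win.Get_accumulate(one, old, 0, op=MPI.SUM)
        win.Unlock(0)
        if old[0] >= T:
            break
        local_hits += int(np.sum(hit_circle(M)))
        local_draws += M

# combine all workers' counts on the manager and print the estimate
total_hits = comm.reduce(local_hits, op=MPI.SUM, root=0)
total_draws = comm.reduce(local_draws, op=MPI.SUM, root=0)
if rank == 0:
    print(f'pi={4 * total_hits / total_draws}')
win.Free()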

The result of a run is:
[output screenshot]

It is important to note that the tasks performed by each worker are nondeterministic and may vary from run to run.

4) Fix the number of workers to the maximum number of hardware threads your computer supports. Change the workload (number of samples) per task, test the overall running time of program 3) for each workload, and then fill in the table below. The total number of samples across all tasks is recommended (not compulsory) to be set to N = 10^7.

Workload Size        …
Running Time (s)

Analyze why the running time decreases, increases, or remains unchanged as the workload size increases. (5 marks)

Note: The total number of samples and the workload sizes in the above table can be adjusted to make the phenomenon clear, depending on your machine configuration.
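For example, keeping N = 10^7 fixed, the workload size is the M hardcoded in the scaffold and the task count passed on the command line is T = N/M. The script name and process counts below are illustrative (8 workers plus 1 manager):

time mpiexec -n 9 python lab3_work_pool.py 100000    # M = 10**2, so T = 10**5
time mpiexec -n 9 python lab3_work_pool.py 1000      # after editing M to 10**4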
