ECE 51216, Spring 2023 Digital Systems Design Automation
Project Submission Guidelines
Please read the guidelines and information in this document carefully – they will help you prepare your project report and code submission so that you receive the highest credit.

DEADLINE
1. WHAT TO SUBMIT
You will need to submit the following:
● A project report that clearly documents the motivation/rationale, algorithms and data structures chosen, experimental methodology, results obtained, and an analysis of the results and conclusions. The requirements for the project report will vary based on which option you have chosen (1 – build, 2 – improve, or 3 – apply). Please read the guidelines provided in Section 2.
● For project teams pursuing Option 1 (build) or Option 2 (improve), a self-contained code package (.zip or .tar.gz file) containing the code that you have implemented for the SAT solver, makefiles/compilation scripts, and benchmarks used for evaluation. The code must follow the guidelines specified in Sections 3/4. I cannot evaluate (and therefore cannot give credit for) code that does not compile or run, so please make sure that you provide a submission that is self-contained and works on the specified machines.
● For teams pursuing Option 3 (apply SAT) or a custom project, you must reach out and schedule a time to make a 15-minute presentation to me on your project. The presentation must be on or before May 4th, 2023 (see Section 2 for details).
All code and project report submissions must be made on Brightspace (note that Brightspace allows you to include multiple attachments with your submission, so use one for the report and one for the code).
2. PROJECT REPORT
You are free to choose your own writing style for the report, provided that it includes at least the following information. Feel free to add additional material that you think is relevant.
● Option 1 (Build): In this case, your report should (i) provide a pseudo-code description of the basic algorithm (DPLL) and all the major heuristics implemented (BCP, conflict clause learning, non-chronological backtracking, etc.), (ii) explain the data structures used and justify your choices, (iii) describe the experimental methodology, benchmarks used to evaluate your implementation, and the results obtained, (iv) analyze the results (e.g., how does the run-time of your implementation vary with problem size, what is the effect of turning on/off each of the heuristics, etc.), and (v) summarize the significant conclusions that can be drawn from your results.
● Option 2 (Improve): For this case, your report should (i) introduce and motivate the advanced heuristic that you chose to implement (why did you think the chosen technique would be useful, and under what scenarios), (ii) describe at a conceptual level (e.g., pseudo-code) the changes you made to the existing SAT solver, (iii) describe the experimental methodology (which SAT solver was used, benchmarks used, and platform used) and the results obtained, (iv) analyze the results and discuss the impact of the implemented technique (when/by how much does the performance of the original SAT solver improve, and why), and (v) summarize any significant conclusions that can be drawn from your results, including how well the improvement works and when it does or does not work well.
● Option 3 (Apply SAT): For this case, your report should (i) introduce the problem that you are formulating as a SAT problem, (ii) motivate why SAT is a relevant and potentially promising technique, i.e., why did you think this problem was suited to be solved by a SAT solver rather than by other techniques, (iii) describe clearly how the problem was formulated as (mapped to) a SAT problem, (iv) describe the experimental methodology (which SAT solver was used, benchmarks used, and platform used) and the results obtained, and (v) analyze the results obtained and explain what they mean in the context of the original problem you are targeting to solve.
o In addition to the project report, schedule a 30-minute meeting with me to present your project.
● Custom Projects (not based on SAT): Describe the problem you are trying to solve, motivate why it is relevant/important, the novel aspects (i.e., what is new that has not been done before) if any, the solution approach, and results.
o In addition to the project report, schedule a 30-minute meeting with me to present your project.
You are strongly encouraged to use figures, graphs, and tables whenever appropriate to support your explanation. For example, consider drawing figures that illustrate your data structures, and presenting your results in graphs and/or tables.
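For teams choosing Option 1, the kind of pseudo-code description expected for the basic algorithm can be sketched as follows. This is only an illustrative starting point in Python: it omits all of the required heuristics (BCP with watched literals, clause learning, non-chronological backtracking), and the function name `dpll` and the clause representation (DIMACS-style signed integers) are assumptions for the sketch, not a required design.

```python
# Illustrative DPLL sketch – a minimal recursive splitting procedure.
# A formula is a list of clauses; each clause is a list of non-zero
# integers (DIMACS-style: -3 means "NOT x3").

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None if UNSAT."""
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            var, val = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == val:
                    satisfied = True
                    break
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:          # empty clause -> conflict
            return None
        simplified.append(new_clause)
    if not simplified:              # all clauses satisfied
        return assignment
    # Unit propagation: a one-literal clause forces its assignment.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # Branch on the first unassigned variable, trying both values.
    var = abs(simplified[0][0])
    for val in (True, False):
        result = dpll(simplified, {**assignment, var: val})
        if result is not None:
            return result
    return None
```

A real implementation would replace the repeated clause-list copying with incremental data structures (the choices you are asked to justify in part (ii) of the report).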
3. INTERFACE FOR YOUR SAT SOLVER

Your SAT solver should be run using the following command line interface:

mySAT benchmark.cnf

where "mySAT" is the name of the executable, and "benchmark.cnf" is the name of the CNF benchmark.
Any additional command line options must be optional and described clearly in the README file (see Section 4).
Your SAT solver MUST print out the following information to standard output (stdout) if it finds the CNF formula to be satisfiable:
RESULT: SAT
ASSIGNMENT: a=1 b=0 c=1 …

The first line simply says the formula is satisfiable. The second line is the assignment of values to the variables that makes the CNF formula TRUE (replace a, b, c, … by the actual names of the variables in the CNF formula). Be sure to follow this exact format since I have a script that looks for this output. Failure to do this may cause you to not receive proper credit.
Your SAT solver MUST print out the following information if it finds the CNF formula to be unsatisfiable:

RESULT: UNSAT

Be sure to follow this exact format since I have a script that looks for this output. Failure to do this may cause you to not receive proper credit. The words "SAT" and "UNSAT" should not appear anywhere else in the program output other than to report the final result of the solver.
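The required output behaviour above can be sketched as follows. Python is used only for illustration (your solver may be in any language), and `solve` is a hypothetical placeholder for your actual solver, not part of the specification.

```python
import sys

def report(result):
    """Print the result in the exact format the grading script expects.
    `result` is None for UNSAT, or a dict mapping variable names to 0/1."""
    if result is None:
        print("RESULT: UNSAT")
    else:
        print("RESULT: SAT")
        pairs = " ".join(f"{name}={val}" for name, val in result.items())
        print(f"ASSIGNMENT: {pairs}")

if __name__ == "__main__" and len(sys.argv) > 1:
    cnf_path = sys.argv[1]      # invoked as: mySAT benchmark.cnf
    # result = solve(cnf_path)  # hypothetical: your solver goes here
    # report(result)
    pass
```

Keeping all result printing in one place makes it easy to guarantee that "SAT" and "UNSAT" never appear elsewhere in the program output.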
4. CODE SUBMISSION
Please submit your code on Brightspace as a single file (.zip or .tar.gz). The submission must be self-contained, i.e., it should include all packages used other than standard tools like the compiler, standard libraries, and utilities already installed on most UNIX systems. In particular, the submission must include all source code, benchmarks used in your testing, and makefiles or compilation scripts. I will test (compile and run) all code submissions on an ECN UNIX computer. Specifically, your code must compile and run on one of the following machines (all of which you should be able to access):
● (Preferred): ececomp.ecn.purdue.edu
● (Under special circumstances only): I am willing to run your code on other ECN machines provided you clearly specify which one – for example, if you need a platform with a larger number of processors/cores.
Documentation: The submission must contain a README file explaining (i) the directory structure and what code is contained in each source/header file, (ii) how to compile your code, and (iii) how to run your code (for Options 1 and 2, you must adhere to the interface described in Section 3). You must document important data structures and functions in your source code. Please note that 10% of the project score will be allocated for documentation.

Compilation: There must be a single command to compile your source code into an executable (either a makefile or a compilation script).

Execution: For Options 1 and 2, see Section 3. For Option 3, describe how I should run your code. There must be a clear description of the command line interface used to run your program. All benchmarks that you used to test the code must be included in your submission. In addition, I may execute your code on additional benchmarks.
The project submissions will be graded on a scale of 100, which will be broken down as given below.
For Option 1 (build), the breakdown of credit will be as follows:
● 50% for functionality and correct implementation – the algorithm and heuristics implemented, and whether the code compiles and runs correctly on the specified platform.
● 15% for performance – how efficient your implementation is in CPU time and memory usage, and how these requirements scale with problem size/difficulty.
● 10% for clear source code documentation and README file.
● 25% for the project report (see Section 2 for requirements).

For Option 2 (improve), the breakdown of credit will be as follows:
● 50% for functionality and correct implementation – the advanced heuristics chosen, and whether the code compiles and runs correctly on the specified platform.
● 15% for performance – how efficient your implementation is in CPU time and memory usage, and how these requirements scale with problem size/difficulty.
● 10% for clear code documentation and README file.
● 25% for the project report (see Section 2 for requirements).
For Option 3 (apply) and custom projects not involving SAT, a typical breakdown of credit will be as follows. Due to the wide range of projects in this category, I may have to change the allocation according to the nature of the project:
● 50% for problem selection, formulation, and solution approach – the choice of problem, how the problem was formulated, and the solution approach.
● 25% for implementation and demonstrated results.
● 25% for the project report and presentation (see Section 2 for requirements).