
[2022] STOR566 Introduction to Deep Learning - Homework 1


Introduction to Deep Learning, Fall 2022


Please answer the questions using the Jupyter notebook script.
For submission, include your code, code output, and answers in the script and submit it on Sakai. Please do not modify the existing cells in the script, but you may add cells between the exercise statements.

To write Markdown, switch the cell type from Code to Markdown (press 'm' while in command mode) and use Markdown syntax. For a brief tutorial, see: https://daringfireball.net/projects/markdown/syntax.

1 Problem 1. (10 points)

Prove whether the following functions are convex or not.

(a) (5 points) $f(x_1, x_2) = (x_1 x_2 - 1)^2$, where $x_1, x_2 \in \mathbb{R}$.

(b) (5 points) $f(w_1, w_2) = w_1^2 - w_2^2$, where $w_1, w_2 \in \mathbb{R}$.
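One standard route for questions like these is the second-order test; the reminder below is generic and is not tied to either function above:

$$f \text{ twice differentiable:} \qquad f \text{ is convex} \iff \nabla^2 f(x) \succeq 0 \ \text{ for every } x.$$

Equivalently, exhibiting a single point where the Hessian is indefinite, or a line along which the one-dimensional restriction $g(t) = f(x + t d)$ is non-convex, is enough to disprove convexity.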

2 Problem 2. (10 points)

Identify the stationary points of $f(x) = 2x_1 + 12x_2 + x_1^2 - 3x_2^2$. Are they local minima/maxima, global minima/maxima, or saddle points? Why?
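As a generic reminder (a sketch of the standard classification, not specific to the coefficients above): stationary points are the solutions of $\nabla f(x) = 0$, and for a function with constant Hessian they are classified by the eigenvalues of $\nabla^2 f$:

$$\nabla^2 f \succ 0 \;\Rightarrow\; \text{minimum}, \qquad \nabla^2 f \prec 0 \;\Rightarrow\; \text{maximum}, \qquad \nabla^2 f \text{ indefinite} \;\Rightarrow\; \text{saddle point}.$$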

3 Problem 3. (80 points)

Given training data $\{x_i, y_i\}_{i=1}^{n}$, with each $x_i \in \mathbb{R}^d$ and $y_i \in \{+1, -1\}$, we try to solve the following logistic regression problem by gradient descent:

Test the algorithm using the “heart scale” dataset with $n = 270$ and $d = 13$: the matrix $X$ is stored in the file “X heart”, and the vector $y$ is stored in the file “y heart”. (“X heart” contains $n$ lines, each line storing a vector $x_i$ of $d$ real numbers; “y heart” contains the $y$ vector.)
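A minimal loading sketch, assuming the two files are plain whitespace-separated text in the working directory; the underscored filenames used below are placeholders for however the files are actually named in the assignment package:

```python
import numpy as np

# Assumed filenames and whitespace-delimited format; adjust to the distributed files.
X = np.loadtxt("X_heart")   # expected shape (270, 13): one x_i per line
y = np.loadtxt("y_heart")   # expected shape (270,): labels in {+1, -1}

print(X.shape, y.shape)     # sanity check against n = 270, d = 13
```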

1.  (a)  (5 points) Compute the gradient of $f(w)$ with respect to $w$. (A reference form, under an assumed standard logistic loss, is sketched after this list.)

2.  (b)  (30 points) Implement the gradient descent algorithm with a fixed step size $\eta$. Find a small $\eta_1$ such that the algorithm converges, then increase the step size to an $\eta_2$ for which the algorithm cannot converge. Run 50 iterations and plot iteration versus $\log(f(x_k) - f(x^*))$ for both $\eta_1$ and $\eta_2$. In practice it is impossible to get the exact optimal solution $x^*$, so use the minimum value you computed as $f(x^*)$ when you plot the figure. Report the $f(x^*)$ value you used for generating the plots. (A minimal implementation sketch appears after this list.)

3.  (c)  (5 points) Write down the pseudocode of gradient descent with backtracking line search ($\sigma = 0.01$).

4.  (d)  (20 points) Implement the gradient descent algorithm with backtracking line search ($\sigma = 0.01$). Plot the same iteration versus $\log(f(x_k) - f(x^*))$ plot. (See the backtracking sketch after this list.)

5.  (e)  (20 points) Test your implementation (gradient descent with backtracking line search) on a larger dataset, “epsilonsubset”. Plot the same iteration versus error plot.
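The sketches below assume the standard unregularized logistic loss; this is an assumption, since the assignment's exact objective (for example, any regularization term) may differ. Under that assumption, the objective and its gradient for part (a) are

$$f(w) = \frac{1}{n}\sum_{i=1}^{n} \log\!\left(1 + \exp\!\left(-y_i\, w^{\top} x_i\right)\right), \qquad \nabla f(w) = -\frac{1}{n}\sum_{i=1}^{n} \frac{y_i\, x_i}{1 + \exp\!\left(y_i\, w^{\top} x_i\right)}.$$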
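A minimal sketch for part (b) under the same assumed loss, reusing `X` and `y` from the loading snippet above; the step sizes, iteration count, and the small constant inside the logarithm are illustrative choices, not prescribed values:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(w, X, y):
    # Assumed objective: average logistic loss; logaddexp(0, z) = log(1 + exp(z)), computed stably.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def grad_f(w, X, y):
    # Gradient of the assumed objective: -(1/n) * sum_i y_i x_i / (1 + exp(y_i w^T x_i)).
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    return -(X.T @ (y * s)) / len(y)

def gradient_descent(X, y, eta, iters=50):
    w = np.zeros(X.shape[1])
    values = [f(w, X, y)]
    for _ in range(iters):
        w = w - eta * grad_f(w, X, y)
        values.append(f(w, X, y))
    return np.array(values)

# Illustrative step sizes only; eta_1 should converge, eta_2 should not.
vals1 = gradient_descent(X, y, eta=0.1)
vals2 = gradient_descent(X, y, eta=100.0)
f_star = min(vals1.min(), vals2.min())            # proxy for f(x*), as the problem allows
plt.plot(np.log(vals1 - f_star + 1e-12), label="eta_1")
plt.plot(np.log(vals2 - f_star + 1e-12), label="eta_2")
plt.xlabel("iteration")
plt.ylabel("log(f(x_k) - f(x*))")
plt.legend()
plt.show()
```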
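For parts (c)-(e), one common Armijo-style backtracking variant, reusing `f` and `grad_f` from the previous sketch; $\sigma = 0.01$ comes from the assignment, while the halving factor and the initial trial step are assumed values:

```python
def gd_backtracking(X, y, iters=50, sigma=0.01, beta=0.5, eta0=1.0):
    """Gradient descent with Armijo backtracking line search (a sketch).

    sigma : sufficient-decrease constant from the assignment
    beta  : step-shrinking factor (assumed value)
    eta0  : initial trial step size at each iteration (assumed value)
    """
    w = np.zeros(X.shape[1])
    values = [f(w, X, y)]
    for _ in range(iters):
        g = grad_f(w, X, y)
        eta = eta0
        # Shrink eta until the Armijo condition f(w - eta g) <= f(w) - sigma * eta * ||g||^2 holds.
        while f(w - eta * g, X, y) > f(w, X, y) - sigma * eta * (g @ g):
            eta *= beta
        w = w - eta * g
        values.append(f(w, X, y))
    return np.array(values)
```

The same routine can be rerun unchanged on the “epsilonsubset” data for part (e), provided that dataset is loaded into `X` and `y` in the same way.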
