1. Construct an example showing that the 0−1 loss function may suffer from local minima; namely, construct a training sample $S \in (\mathcal{X} \times \{\pm 1\})^m$ (say, for $\mathcal{X} = \mathbb{R}^2$) for which there exist a vector $\mathbf{w}$ and some $\epsilon > 0$ such that:

   - For any $\mathbf{w}'$ with $\|\mathbf{w} - \mathbf{w}'\| \le \epsilon$ we have $L_S(\mathbf{w}) \le L_S(\mathbf{w}')$ (where the loss here is the 0−1 loss). This means that $\mathbf{w}$ is a local minimum of $L_S$.

   - There exists some $\mathbf{w}^*$ such that $L_S(\mathbf{w}^*) < L_S(\mathbf{w})$. This means that $\mathbf{w}$ is not a global minimum of $L_S$.

   (A small sketch of evaluating the empirical 0−1 loss on a toy sample appears after this list.)

2. Consider the learning problem of logistic regression: Let $\mathcal{H} = \mathcal{X} = \{\mathbf{x} \in \mathbb{R}^d : \|\mathbf{x}\| \le B\}$ for some scalar $B > 0$, let $\mathcal{Y} = \{\pm 1\}$, and let the loss function $\ell$ be defined as $\ell(\mathbf{w}, (\mathbf{x}, y)) = \log\bigl(1 + \exp(-y \langle \mathbf{w}, \mathbf{x} \rangle)\bigr)$. Show that the resulting learning problem is both convex-Lipschitz-bounded and convex-smooth-bounded, and specify the parameters of Lipschitzness and smoothness. (A numerical sketch of this loss and its gradient appears after this list.)
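For the first item, a minimal Python sketch of evaluating the empirical 0−1 loss $L_S(\mathbf{w})$ of a homogeneous halfspace on a sample in $\mathbb{R}^2$ may be useful. The toy sample, the candidate vector, and the tie-breaking convention for $\langle \mathbf{w}, \mathbf{x} \rangle = 0$ below are illustrative assumptions; this is not the construction the exercise asks for.

```python
import numpy as np

def zero_one_empirical_loss(w, X, y):
    """Empirical 0-1 loss L_S(w) of the halfspace predictor x -> sign(<w, x>).

    X has shape (m, 2) for a sample in R^2, y holds labels in {+1, -1}.
    Points with <w, x> = 0 are counted as errors here; this is one common
    convention, not the only possible one.
    """
    predictions = np.where(X @ w > 0, 1, -1)
    return np.mean(predictions != y)

# Hypothetical toy sample in R^2 (not the sample the exercise asks you to construct).
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0]])
y = np.array([1, -1, 1])
w = np.array([1.0, -1.0])
print(zero_one_empirical_loss(w, X, y))

# Probing nearby vectors w' with ||w - w'|| <= eps shows how one could test
# numerically whether a candidate w behaves like a local minimum of L_S.
eps = 0.1
rng = np.random.default_rng(0)
for _ in range(5):
    delta = rng.normal(size=2)
    w_prime = w + eps * delta / np.linalg.norm(delta)
    print(zero_one_empirical_loss(w_prime, X, y))
```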
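For the logistic-regression item, the following sketch evaluates the stated loss $\ell(\mathbf{w}, (\mathbf{x}, y)) = \log(1 + \exp(-y \langle \mathbf{w}, \mathbf{x} \rangle))$ and its gradient, and checks numerically that the gradient norm stays below $B$ on randomly drawn bounded instances. The dimension $d$, the value of $B$, and the sampling scheme are assumptions made for illustration; the check is suggestive of the Lipschitz parameter but is not a proof.

```python
import numpy as np

def logistic_loss(w, x, y):
    """The loss l(w, (x, y)) = log(1 + exp(-y <w, x>)) from the exercise."""
    return np.log1p(np.exp(-y * (w @ x)))

def logistic_loss_grad(w, x, y):
    """Gradient of the logistic loss with respect to w (standard calculus)."""
    margin = y * (w @ x)
    return -y * x / (1.0 + np.exp(margin))

# Hypothetical instance: B bounds both ||x|| and ||w||, matching the exercise
# setup; d = 3 is chosen arbitrarily for illustration.
B, d = 2.0, 3
rng = np.random.default_rng(1)

max_grad_norm = 0.0
for _ in range(1000):
    w = rng.normal(size=d); w *= B / max(np.linalg.norm(w), B)  # keep ||w|| <= B
    x = rng.normal(size=d); x *= B / max(np.linalg.norm(x), B)  # keep ||x|| <= B
    y = rng.choice([-1, 1])
    max_grad_norm = max(max_grad_norm, np.linalg.norm(logistic_loss_grad(w, x, y)))

# The observed maximum stays below B, consistent with (but not a proof of)
# B-Lipschitzness of w -> l(w, (x, y)) over the bounded domain.
print(max_grad_norm, "<=", B)
```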

 
