Derive stochastic gradient-descent steps for the variation of L1-loss classification introduced in Exercise 6.

1. Derive stochastic gradient-descent steps for the variation of L1-loss classification introduced in Exercise 6. You can use a constant step size.
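Since the exact loss variation from Exercise 6 is not restated here, the sketch below assumes a plain L1 (absolute-deviation) loss |y - w·x| and uses its subgradient; the step size `eta` is an assumed constant hyperparameter, and the function name `sgd_l1_step` is illustrative only.

```python
import numpy as np

def sgd_l1_step(w, x, y, eta=0.01):
    """One stochastic subgradient step for the assumed loss |y - w.x|.

    The subgradient of |y - w.x| w.r.t. w is -sign(y - w.x) * x,
    so the update moves w a constant step against that direction.
    """
    r = y - w @ x                 # residual on the sampled point
    grad = -np.sign(r) * x        # subgradient of the absolute loss
    return w - eta * grad
```

If Exercise 6 uses a margin-based variant such as |1 - y(w·x)| instead, the same pattern applies with the residual replaced by the margin term.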

2. Derive stochastic gradient-descent steps for SVMs with quadratic loss instead of hinge loss. You can use a constant step size.
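As a starting point, a minimal sketch of one such step, assuming the quadratic (squared hinge) loss max(0, 1 - y w·x)² with an L2 regularizer; `eta` and `lam` are assumed constant hyperparameters, not values fixed by the exercise.

```python
import numpy as np

def sgd_sq_hinge_step(w, x, y, eta=0.01, lam=0.1):
    """One SGD step for lam/2 * ||w||^2 + max(0, 1 - y * w.x)^2.

    The squared hinge is differentiable: its gradient is
    -2 * max(0, 1 - y * w.x) * y * x, which vanishes once the
    point clears the margin.
    """
    margin = 1 - y * (w @ x)
    grad = lam * w                     # gradient of the regularizer
    if margin > 0:                     # point inside the margin
        grad = grad - 2 * margin * y * x
    return w - eta * grad
```

Note that, unlike the hinge loss, no subgradient argument is needed here: the squared hinge has a true gradient everywhere.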

3. Provide an algorithm to perform classification with explicit kernel feature transformation and the Nyström approximation. How would you use ensembles to make the algorithm efficient and accurate?
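One possible shape of the feature-transformation part, sketched under the standard Nyström construction φ(x) = W^(-1/2) k(landmarks, x) with an RBF kernel; the landmark count, `gamma`, and function names are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian kernel matrix between row sets X (n,d) and Y (m,d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0):
    """Explicit Nystrom features: phi(X) = K(X, L) @ W^{-1/2},
    where W = K(L, L) on the sampled landmarks L, so that
    phi(X) @ phi(X).T approximates the full kernel matrix."""
    W = rbf(landmarks, landmarks, gamma)
    vals, vecs = np.linalg.eigh(W)          # W is symmetric PSD
    vals = np.clip(vals, 1e-12, None)       # guard tiny/negative eigenvalues
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return rbf(X, landmarks, gamma) @ W_inv_sqrt
```

Any linear classifier can then be trained on these explicit features. For the ensemble part of the question, a natural direction is to repeat this with several independent landmark samples, train one linear model per sample, and average or vote their predictions: each member is cheap (small landmark set), and averaging reduces the variance introduced by landmark sampling.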

4. Consider an SVM with properly optimized parameters. Provide an intuitive argument as to why the out-of-sample error rate of the SVM will usually be less than the fraction of support vectors in the training data.
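One classical route to this intuition (offered here as a hint, not the required answer) is the leave-one-out argument: deleting a training point that is not a support vector leaves the decision boundary unchanged, so that point is still classified correctly; only support vectors can contribute leave-one-out errors. Since leave-one-out error is an unbiased estimate of the out-of-sample error of a classifier trained on n-1 points, this gives, in expectation,

```latex
\mathbb{E}\left[E_{\text{out}}(n-1)\right]
  \;=\; \mathbb{E}\left[e_{\text{LOO}}(n)\right]
  \;\le\; \frac{\mathbb{E}\left[\#\mathrm{SV}(n)\right]}{n},
```

and the inequality is usually strict because many support vectors are still classified correctly when left out.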


Suggest a modification of the binary search algorithm that emulates this strategy for a list of names.

1. Suppose that a list contains the values 20 44 48 55 62 66 74 88 93 99 at index positions 0 through 9. Trace the values of the variables….

Explain why insertion sort works well on partially sorted lists.
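A small instrumented sort can make the answer concrete: counting the element shifts shows that a nearly sorted input needs almost no work, because each inner `while` loop stops immediately. This is an illustrative sketch, not code from the textbook.

```python
def insertion_sort(a):
    """Sort list a in place and return the number of element shifts.

    Each pass slides a[i] left past every larger neighbor; on a
    partially sorted list most elements are already near their final
    position, so the inner loop exits after few (or zero) shifts.
    """
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift the larger element right
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts
```

On a sorted list of any length the count is 0 (linear total work); on a reversed list of n elements it is n(n-1)/2, the quadratic worst case.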

1. Which configuration of data in a list causes the smallest number of exchanges in a selection sort? Which configuration of data causes the largest number of exchanges? 2. Explain….
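To experiment with the first part of this question, a selection sort that counts only actual exchanges (skipping the swap when the minimum is already in place) can be run on different input configurations; this is an illustrative sketch under that common "skip self-swaps" convention, which the textbook's version may or may not use.

```python
def selection_sort(a):
    """Sort list a in place and return the number of exchanges.

    A swap is counted only when the minimum of the unsorted suffix
    is not already at position i, so an already-sorted list costs
    zero exchanges.
    """
    exchanges = 0
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)  # index of minimum
        if m != i:
            a[i], a[m] = a[m], a[i]
            exchanges += 1
    return exchanges
```

Trying a sorted list gives 0 exchanges, while other configurations never exceed n - 1, which is a useful contrast with insertion sort's shift counts.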

Draw a class diagram that shows the relationships among the classes in this new version of the system

Jack decides to rework the banking system, which already includes the classes BankView, Bank, SavingsAccount, and RestrictedSavingsAccount. He wants to add another class for checking accounts. He sees that savings….