
1. Find a decision tree of depth 2 that attains zero training error.

20.1 Neural networks are universal approximators: Let f : [−1, 1]^n → [−1, 1] be a ρ-Lipschitz function. Fix some ε > 0. Construct a neural network N : [−1, 1]^n → [−1, 1], with the sigmoid activation function, such that for every x ∈ [−1, 1]^n it holds that |f(x) − N(x)| ≤ ε.

Hint: Similarly to the proof of Theorem 19.3, partition [−1, 1]^n into small boxes. Use the Lipschitzness of f to show that it is approximately constant on each box. Finally, show that a neural network can first decide which box the input vector belongs to, and then predict the average value of f on that box.
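To get a feel for the hint, here is a one-dimensional sketch (n = 1, so the "boxes" are just intervals). Everything here is an illustrative assumption, not part of the exercise: the target f = np.sin, the box count, and the sigmoid sharpness are arbitrary choices. Each interior box boundary gets one steep sigmoid unit; the output weights are the jumps between the box values, so the network is approximately constant on each box.

```python
import numpy as np

def sigmoid(z):
    # numerically stable sigmoid, written via tanh to avoid overflow
    return 0.5 * (1.0 + np.tanh(z / 2.0))

def build_network(f, n_boxes=100, sharpness=2000.0):
    """One-hidden-layer sketch: a sum of steep sigmoid steps, one per
    interior box boundary, approximately constant on each box."""
    edges = np.linspace(-1.0, 1.0, n_boxes + 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    values = f(centers)        # f is nearly constant on each small box
    jumps = np.diff(values)    # output weight of the unit at each boundary
    interior = edges[1:-1]     # one hidden unit per interior boundary

    def N(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # hidden layer: each steep sigmoid "turns on" as x crosses its edge
        h = sigmoid(sharpness * (x[:, None] - interior[None, :]))
        return values[0] + h @ jumps

    return N
```

Shrinking ε just means more, narrower boxes and steeper sigmoids; the error at any point is at most roughly the Lipschitz slack over one box plus a transition term near the boundaries.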

2. Prove Theorem 20.5.

Hint: For every f : {−1, 1}^n → {−1, 1}, construct a 1-Lipschitz function g : [−1, 1]^n → [−1, 1] such that if you can approximate g then you can express f.

 


Suggest a modification of the binary search algorithm that emulates this strategy for a list of names.
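"This strategy" presumably refers to the phone-book approach discussed before this exercise: opening the book near where the target's first letter is expected rather than in the middle. Under that reading, one sketch is to interpolate the first probe from the first letter's rank in A–Z and then fall back to ordinary binary search; the function name and the uppercase-ASCII assumption are illustrative.

```python
def name_search(names, target):
    """Binary search over a sorted list of names, with the FIRST probe
    interpolated from the target's first letter (phone-book strategy).
    Assumes names start with ASCII letters."""
    left, right = 0, len(names) - 1
    first_probe = True
    while left <= right:
        if first_probe:
            # estimate position from the first letter's rank in A..Z
            frac = (ord(target[0].upper()) - ord('A')) / 25.0
            mid = left + int(frac * (right - left))
            first_probe = False
        else:
            mid = (left + right) // 2   # ordinary binary search from here on
        if names[mid] == target:
            return mid
        elif names[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```

For "Clark" in a six-name list, the first probe lands near the front (C is early in the alphabet) instead of at the middle index.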

1. Suppose that a list contains the values 20, 44, 48, 55, 62, 66, 74, 88, 93, 99 at index positions 0 through 9. Trace the values of the variables….
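A short script can generate the trace to check a hand-worked answer; the variable names first, mid, and last are an assumption about the textbook's version of the algorithm, so rename them to match yours.

```python
def binary_search_trace(lst, target):
    """Iterative binary search that prints first, mid, and last at each
    probe (variable names assumed, not taken from the textbook)."""
    first, last = 0, len(lst) - 1
    while first <= last:
        mid = (first + last) // 2
        print(f"first={first} mid={mid} last={last} value={lst[mid]}")
        if lst[mid] == target:
            return mid
        elif lst[mid] < target:
            first = mid + 1
        else:
            last = mid - 1
    return -1

binary_search_trace([20, 44, 48, 55, 62, 66, 74, 88, 93, 99], 88)
```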

Explain why insertion sort works well on partially sorted lists.
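The reason is visible if you count comparisons: on a partially sorted list, the inner loop of insertion sort exits after a single comparison for most positions, so the total work is close to linear. A minimal instrumented sketch (the comparison counter is added for illustration):

```python
def insertion_sort(lst):
    """In-place insertion sort; returns the number of comparisons made.
    On nearly sorted input the inner while loop exits almost at once."""
    comparisons = 0
    for i in range(1, len(lst)):
        item = lst[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if lst[j] > item:
                lst[j + 1] = lst[j]   # shift larger element right
                j -= 1
            else:
                break                 # prefix already in order: stop early
        lst[j + 1] = item
    return comparisons
```

On a 100-element sorted list this makes 99 comparisons (one per position); on the reverse-sorted list it makes 4,950, the full quadratic cost.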

1. Which configuration of data in a list causes the smallest number of exchanges in a selection sort? Which configuration of data causes the largest number of exchanges?
2. Explain….
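Counting exchanges directly is a quick way to test a conjecture; this sketch skips the swap when the minimum is already in place, which is the version of selection sort that makes the question interesting.

```python
def selection_sort(lst):
    """In-place selection sort; returns the number of actual exchanges.
    The swap is skipped when the minimum is already in position, so an
    already-sorted list needs zero exchanges."""
    exchanges = 0
    for i in range(len(lst) - 1):
        min_idx = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[min_idx]:
                min_idx = j
        if min_idx != i:
            lst[i], lst[min_idx] = lst[min_idx], lst[i]
            exchanges += 1
    return exchanges
```

For a 10-element list, a sorted configuration needs 0 exchanges, while the reverse-sorted configuration needs 5 (each swap fixes two elements at once).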

Draw a class diagram that shows the relationships among the classes in this new version of the system.

Jack decides to rework the banking system, which already includes the classes BankView, Bank, SavingsAccount, and RestrictedSavingsAccount. He wants to add another class for checking accounts. He sees that savings….
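One plausible shape for the rework (a sketch, not the textbook's code: the Account base class, its attributes, and CheckingAccount's behavior are all assumptions) is to factor the state shared by savings and checking accounts into a common superclass, so CheckingAccount does not duplicate SavingsAccount's code:

```python
class Account:
    """Hypothetical common base class holding state shared by all
    account types (attribute names are assumptions)."""
    def __init__(self, name, pin, balance=0.0):
        self.name = name
        self.pin = pin
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "Insufficient funds"
        self.balance -= amount
        return self.balance


class SavingsAccount(Account):
    RATE = 0.02  # illustrative interest rate

    def compute_interest(self):
        interest = self.balance * SavingsAccount.RATE
        self.deposit(interest)
        return interest


class RestrictedSavingsAccount(SavingsAccount):
    MAX_WITHDRAWALS = 3  # illustrative monthly limit

    def __init__(self, name, pin, balance=0.0):
        super().__init__(name, pin, balance)
        self.counter = 0

    def withdraw(self, amount):
        if self.counter >= RestrictedSavingsAccount.MAX_WITHDRAWALS:
            return "No more withdrawals this month"
        result = super().withdraw(amount)
        if not isinstance(result, str):
            self.counter += 1
        return result


class CheckingAccount(Account):
    """Jack's new class; assumed here to earn no interest, so it only
    needs the inherited deposit/withdraw behavior."""
    pass
```

In the corresponding class diagram, SavingsAccount and CheckingAccount are siblings under Account, RestrictedSavingsAccount extends SavingsAccount, and Bank aggregates Account objects.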