
1. Consider the problem of learning halfspaces with the hinge loss. We limit our domain to the Euclidean ball with radius *R*. That is, *X* = {**x** : ‖**x**‖₂ ≤ *R*}. The label set is *Y* = {±1} and the loss function ℓ is defined by ℓ(**w**, (**x**, *y*)) = max{0, 1 − *y*⟨**w**, **x**⟩}. We already know that the loss function is convex. Show that it is *R*-Lipschitz.
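A minimal numerical sanity check of the *R*-Lipschitz claim (an illustration, not a proof): sample points with ‖**x**‖₂ ≤ *R* and verify that |ℓ(**w**₁, (**x**, *y*)) − ℓ(**w**₂, (**x**, *y*))| ≤ *R*‖**w**₁ − **w**₂‖₂ for random pairs of weight vectors. The names and the choice of dimension here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0  # radius of the Euclidean ball containing the instances

def hinge_loss(w, x, y):
    # hinge loss: max{0, 1 - y<w, x>}
    return max(0.0, 1.0 - y * np.dot(w, x))

for _ in range(1000):
    # draw x uniformly in direction, scaled so that ||x||_2 <= R
    x = rng.normal(size=3)
    x *= R * rng.random() / np.linalg.norm(x)
    y = rng.choice([-1, 1])
    w1, w2 = rng.normal(size=3), rng.normal(size=3)
    diff = abs(hinge_loss(w1, x, y) - hinge_loss(w2, x, y))
    # the R-Lipschitz property the exercise asks you to prove
    assert diff <= R * np.linalg.norm(w1 - w2) + 1e-9
```

The check never fires because |max{0, *a*} − max{0, *b*}| ≤ |*a* − *b*|, so the difference is bounded by |*y*⟨**w**₁ − **w**₂, **x**⟩| ≤ ‖**x**‖₂ ‖**w**₁ − **w**₂‖₂ ≤ *R*‖**w**₁ − **w**₂‖₂, which is the heart of the proof.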

2. (*) **Convex-Lipschitz-Boundedness Is Not Sufficient for Computational Efficiency:** In the next chapter we show that, from the statistical perspective, all convex-Lipschitz-bounded problems are learnable (in the agnostic PAC model). However, our main motivation for studying such problems came from the computational perspective: convex optimization is often efficiently solvable. The goal of this exercise is to show that convexity alone is not sufficient for efficiency. We show that even for the case *d* = 1 there is a convex-Lipschitz-bounded problem that cannot be learned by any computable learner. Let the hypothesis class be *H* = [0, 1] and let the example domain, *Z*, be the set of all Turing machines. Define the loss function as follows. For every Turing machine *T* ∈ *Z*, let ℓ(0, *T*) = 1 if *T* halts on the input 0, and ℓ(0, *T*) = 0 if *T* doesn't halt on the input 0. Similarly, let ℓ(1, *T*) = 0 if *T* halts on the input 0, and ℓ(1, *T*) = 1 if *T* doesn't halt on the input 0. Finally, for *h* ∈ (0, 1), let ℓ(*h*, *T*) = *h* · ℓ(0, *T*) + (1 − *h*) · ℓ(1, *T*).
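The loss above can be sketched in a few lines. Since whether *T* halts on input 0 is undecidable, the boolean `halts` parameter below is a hypothetical stand-in oracle, supplied directly rather than computed; the sketch only illustrates that ℓ(·, *T*) is linear (hence convex and 1-Lipschitz) in *h* and that its minimizer over *H* = [0, 1] encodes the halting behaviour.

```python
def loss(h, halts):
    # halts: stand-in for the undecidable predicate "T halts on input 0"
    l0 = 1.0 if halts else 0.0   # l(0, T)
    l1 = 0.0 if halts else 1.0   # l(1, T) = 1 - l(0, T)
    # convex combination: linear in h, so convex and 1-Lipschitz on [0, 1]
    return h * l0 + (1.0 - h) * l1
```

If *T* halts, the loss is ℓ(*h*, *T*) = *h*, minimized at *h* = 0; if *T* doesn't halt, it is 1 − *h*, minimized at *h* = 1. A learner achieving small risk would therefore decide the halting problem, which is why no computable learner can succeed.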