show that ERM_{H^n_k} is computationally hard for every k ≥ 3.

1. In this exercise, we present several classes for which finding an ERM classifier is computationally hard. First, we introduce the class of n-dimensional halfspaces, HS_n, for the domain X = R^n. This is the class of all functions of the form h_{w,b}(x) = sign(⟨w, x⟩ + b), where w, x ∈ R^n, ⟨w, x⟩ is their inner product, and b ∈ R. See a detailed description in…
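For concreteness, the following is a minimal Python sketch (our own, not part of the exercise) of evaluating such a halfspace predictor; the convention that sign(0) maps to +1 is an assumption.

    import numpy as np

    def halfspace_predict(w, b, x):
        """Evaluate h_{w,b}(x) = sign(<w, x> + b) with values in {+1, -1}.

        w, x: arrays of shape (n,); b: scalar.
        Assumed convention: sign(0) is mapped to +1.
        """
        return 1 if np.dot(w, x) + b >= 0 else -1

    # Example in R^2: with w = (1, -1) and b = 0, points with x1 >= x2
    # are labeled +1 and points with x1 < x2 are labeled -1.
    print(halfspace_predict(np.array([1.0, -1.0]), 0.0, np.array([2.0, 1.0])))  # +1
    print(halfspace_predict(np.array([1.0, -1.0]), 0.0, np.array([0.0, 3.0])))  # -1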

2. Show that ERM over the class H = HS_n of linear predictors is computationally hard. More precisely, we consider the sequence of problems in which the dimension n grows linearly and the number of examples m is set to be some constant times n.

Hint: You can prove the hardness by a reduction from the following problem:

Max FS: Given a system of linear inequalities, Ax ≥ b with A ∈ R^{m×n} and b ∈ R^m (that is, a system of m linear inequalities in n variables, x = (x_1, . . . , x_n)), find a subsystem containing as many inequalities as possible that has a solution (such a subsystem is called feasible).

It has been shown (Sankaran 1993) that the problem Max FS is NP-hard. Show that any algorithm that finds an ERM_{HS_n} hypothesis for any training sample S ∈ (R^n × {+1, −1})^m can be used to solve the Max FS problem of size m, n.

Hint: Define a mapping that transforms linear inequalities in n variables into labeled points in R^n, and a mapping that transforms vectors in R^n to halfspaces, such that a vector w satisfies an inequality if and only if the labeled point that corresponds to that inequality is classified correctly by the halfspace corresponding to w. Conclude that the problem of empirical risk minimization for halfspaces is also NP-hard (that is, if it can be solved in time polynomial in the sample size, m, and the Euclidean dimension, n, then every problem in the class NP can be solved in polynomial time).
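To make the problem being reduced from concrete, here is a small Python sketch (our own, exponential-time, and only illustrative) that solves Max FS by brute force, checking each subsystem's feasibility with an LP solver. It does not perform the reduction itself; the mapping from inequalities to labeled points is what the exercise asks for.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def is_feasible(A, b):
        """Check whether {x : A x >= b} is nonempty via an LP feasibility problem."""
        m, n = A.shape
        # linprog uses A_ub x <= b_ub, so rewrite A x >= b as -A x <= -b.
        res = linprog(c=np.zeros(n), A_ub=-A, b_ub=-b, bounds=[(None, None)] * n)
        return res.status == 0

    def max_fs_bruteforce(A, b):
        """Exponential-time Max FS: largest feasible subset of rows of (A, b)."""
        m = A.shape[0]
        for size in range(m, 0, -1):               # try the largest subsystems first
            for rows in itertools.combinations(range(m), size):
                idx = list(rows)
                if is_feasible(A[idx], b[idx]):
                    return size, idx
        return 0, []

    # Tiny example: x >= 1 and -x >= 0 (i.e. x <= 0) cannot hold together,
    # so the largest feasible subsystem has size 1.
    A = np.array([[1.0], [-1.0]])
    b = np.array([1.0, 0.0])
    print(max_fs_bruteforce(A, b))   # prints (1, [0])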

 

Q35.

1. Let X = R^n and let H^n_k be the class of all intersections of k-many linear halfspaces in R^n. In this exercise, we wish to show that ERM_{H^n_k} is computationally hard for every k ≥ 3. Precisely, we consider a sequence of problems where k ≥ 3 is a constant and n grows linearly. The training set size, m, also grows linearly with n. Toward this goal, consider the k-coloring problem for graphs, defined as follows: Given a graph G = (V, E) and a number k, determine whether there exists a function f : V → {1, . . . , k} so that for every (u, v) ∈ E, f(u) ≠ f(v). The k-coloring problem is known to be NP-hard for every k ≥ 3 (Karp 1972).

We wish to reduce the k-coloring problem to ERM_{H^n_k}: that is, to prove that if there is an algorithm that solves the ERM_{H^n_k} problem in time polynomial in k, n, and the sample size m, then there is a polynomial time algorithm for the graph k-coloring problem.

Given a graph G = (V, E), let {v_1, . . . , v_n} be the vertices in V. Construct a sample S(G) ∈ (R^n × {±1})^m, where m = |V| + |E|, as follows (here e_i denotes the i-th standard basis vector of R^n):

- For every v_i ∈ V, construct the instance e_i with a negative label.

- For every edge (v_i, v_j) ∈ E, construct the instance (e_i + e_j)/2 with a positive label.
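Since the construction of S(G) is mechanical, here is a minimal Python sketch of it; the function name and the 0-based vertex indexing are ours, not the exercise's.

    import numpy as np

    def sample_from_graph(n, edges):
        """Build the sample S(G) described above for a graph on vertices 0..n-1.

        Vertices are encoded as the standard basis vectors e_i of R^n, and each
        edge as the midpoint of the two corresponding basis vectors. Returns a
        list of (instance, label) pairs with labels in {+1, -1}.
        """
        basis = np.eye(n)
        sample = [(basis[i], -1) for i in range(n)]                      # one negative point per vertex
        sample += [((basis[i] + basis[j]) / 2, +1) for (i, j) in edges]  # one positive point per edge
        return sample

    # Example: the triangle graph K3 (3-colorable but not 2-colorable).
    S = sample_from_graph(3, [(0, 1), (1, 2), (0, 2)])
    print(len(S))   # m = |V| + |E| = 6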


Suggest a modification of the binary search algorithm that emulates this strategy for a list of names.

1. Suppose that a list contains the values 20, 44, 48, 55, 62, 66, 74, 88, 93, 99 at index positions 0 through 9. Trace the values of the variables….
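The prompt above is truncated, so the following is only a hedged aid: a binary search sketch, assuming the usual left/midpoint/right variables, that prints them at each probe so a hand trace can be checked against it.

    def binary_search_trace(target, sorted_list):
        """Binary search that prints its variables at each probe.

        The variable names (left, right, midpoint) are an assumption about the
        version being traced; adjust them to match the textbook's code.
        """
        left = 0
        right = len(sorted_list) - 1
        while left <= right:
            midpoint = (left + right) // 2
            print("left =", left, "midpoint =", midpoint, "right =", right)
            if sorted_list[midpoint] == target:
                return midpoint
            elif sorted_list[midpoint] < target:
                left = midpoint + 1
            else:
                right = midpoint - 1
        return -1

    data = [20, 44, 48, 55, 62, 66, 74, 88, 93, 99]
    result = binary_search_trace(90, data)   # probes 62, 88, 93; result is -1 (not found)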

Explain why insertion sort works well on partially sorted lists.
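For reference, a minimal insertion sort sketch (not necessarily the textbook's exact code); the early exit of the inner loop is what makes it fast on partially sorted input.

    def insertion_sort(items):
        """In-place insertion sort.

        On a partially sorted list most elements are already close to their
        final position, so the inner while loop exits almost immediately and
        the running time approaches linear.
        """
        for i in range(1, len(items)):
            value = items[i]
            j = i - 1
            while j >= 0 and items[j] > value:   # shift larger items right
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = value                 # drop the value into its slot
        return items

    print(insertion_sort([20, 44, 48, 55, 62, 66, 74, 88, 99, 93]))  # only one item out of place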

1. Which configuration of data in a list causes the smallest number of exchanges in a selection sort? Which configuration of data causes the largest number of exchanges?

2. Explain….
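A hedged sketch for experimenting with question 1: a selection sort variant (ours, using the common "swap only if needed" refinement) that counts exchanges.

    def selection_sort_exchanges(items):
        """Selection sort that counts swaps.

        Selection sort performs at most one exchange per pass, and this variant
        skips the swap entirely when the minimum is already in place, which is
        the behavior the 'smallest number of exchanges' question is probing.
        """
        exchanges = 0
        n = len(items)
        for i in range(n - 1):
            smallest = i
            for j in range(i + 1, n):
                if items[j] < items[smallest]:
                    smallest = j
            if smallest != i:                    # skip the swap if already in place
                items[i], items[smallest] = items[smallest], items[i]
                exchanges += 1
        return exchanges

    print(selection_sort_exchanges([1, 2, 3, 4, 5]))   # 0 exchanges on sorted data
    print(selection_sort_exchanges([5, 4, 3, 2, 1]))   # 2 exchanges on reversed data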

Draw a class diagram that shows the relationships among the classes in this new version of the system.

Jack decides to rework the banking system, which already includes the classes BankView, Bank, SavingsAccount, and RestrictedSavingsAccount. He wants to add another class for checking accounts. He sees that savings….
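The scenario above is truncated, so the following is only one plausible sketch, assuming the point is to factor the behavior shared by savings and checking accounts into a common superclass; every class body and method name here is a guess, not the textbook's code. Bank and BankView would then work with the Account interface rather than with any concrete subclass.

    class Account:
        # Hypothetical common superclass holding the state and behavior that
        # savings and checking accounts are assumed to share.
        def __init__(self, name, balance=0.0):
            self.name = name
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                return "Insufficient funds"
            self.balance -= amount
            return None

    class SavingsAccount(Account):
        # Adds interest on top of the shared behavior.
        RATE = 0.02

        def compute_interest(self):
            interest = self.balance * SavingsAccount.RATE
            self.deposit(interest)
            return interest

    class RestrictedSavingsAccount(SavingsAccount):
        # Caps the number of withdrawals per period.
        MAX_WITHDRAWALS = 3

        def __init__(self, name, balance=0.0):
            super().__init__(name, balance)
            self.counter = 0

        def withdraw(self, amount):
            if self.counter >= RestrictedSavingsAccount.MAX_WITHDRAWALS:
                return "No more withdrawals this period"
            result = super().withdraw(amount)
            if result is None:
                self.counter += 1
            return result

    class CheckingAccount(Account):
        # Jack's new class: reuses Account as-is; checking-specific behavior
        # (for example per-check fees) would be added here.
        pass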