Solve least squares with gradient descent
gdls(A, b, alpha = 0.05, tol = 1e-06, m = 1e+05)
| Argument | Description |
|---|---|
| `A` | a matrix representing the coefficients of the linear system, one row per observation (it need not be square) |
| `b` | a vector representing the right-hand side of the linear system |
| `alpha` | the learning rate |
| `tol` | the expected error tolerance |
| `m` | the maximum number of iterations |
Value: the estimated solution, returned as a column matrix of coefficients.
gdls() solves a linear system using gradient descent.
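The iteration being run is the standard gradient step for the least-squares objective: move the current iterate against the gradient t(A) %*% (A %*% x - b) until the gradient is small or the iteration cap is hit. A minimal sketch of that loop, assuming a mean-squared-error scaling of the gradient and a gradient-norm stopping rule (this is an illustration, not cmna's internal code), looks like:

```r
# Minimal sketch of least-squares gradient descent (illustrative only;
# not cmna's internal implementation). The gradient of the mean squared
# error ||Ax - b||^2 / n is t(A) %*% (A %*% x - b), scaled by the
# number of observations (up to a constant factor).
gdls_sketch <- function(A, b, alpha = 0.05, tol = 1e-6, m = 1e5) {
  x <- matrix(0, ncol(A), 1)                  # start from the zero vector
  b <- matrix(b, ncol = 1)
  for (i in seq_len(m)) {
    g <- t(A) %*% (A %*% x - b) / length(b)   # scaled gradient at the iterate
    if (sqrt(sum(g^2)) < tol)                 # stop once the gradient is tiny
      return(x)
    x <- x - alpha * g                        # step against the gradient
  }
  warning("iterations maximum exceeded")      # hit the cap before converging
  x
}
```

Dividing the gradient by length(b) keeps the effective step size roughly independent of the number of observations; without some such scaling, a fixed alpha like 0.05 can easily diverge on a data set the size of iris.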
Other linear: choleskymatrix(), detmatrix(), invmatrix(), iterativematrix, lumatrix(), refmatrix(), rowops, tridiagmatrix(), vecnorm()
head(b <- iris$Sepal.Length)
#> [1] 5.1 4.9 4.7 4.6 5.0 5.4
head(A <- matrix(cbind(1, iris$Sepal.Width, iris$Petal.Length, iris$Petal.Width), ncol = 4))
#> [,1] [,2] [,3] [,4]
#> [1,] 1 3.5 1.4 0.2
#> [2,] 1 3.0 1.4 0.2
#> [3,] 1 3.2 1.3 0.2
#> [4,] 1 3.1 1.5 0.2
#> [5,] 1 3.6 1.4 0.2
#> [6,] 1 3.9 1.7 0.4
gdls(A, b, alpha = 0.05, m = 10000)
#> Warning: iterations maximum exceeded
#> [,1]
#> [1,] 1.8439696
#> [2,] 0.6538332
#> [3,] 0.7107731
#> [4,] -0.5593274
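Since the example regresses iris Sepal.Length on the other three measurements, the result can be sanity-checked against the exact least-squares solution from base R's QR routines; because gdls() stopped at the iteration cap (note the warning), its estimate should agree with the exact fit to roughly two decimal places rather than match it exactly.

```r
# Exact least-squares solution for the same iris regression, via QR.
# qr.solve() on a rectangular system returns the least-squares fit,
# i.e. the coefficients the gradient-descent iterate is approaching.
b <- iris$Sepal.Length
A <- cbind(1, iris$Sepal.Width, iris$Petal.Length, iris$Petal.Width)
qr.solve(A, b)
```

The gdls() output above (1.844, 0.654, 0.711, -0.559) tracks these coefficients closely, consistent with an iterate that is near, but not yet at, convergence.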