Use gradient descent to find local minima

graddsc(fp, x, h = 0.001, tol = 1e-04, m = 1000)

gradasc(fp, x, h = 0.001, tol = 1e-04, m = 1000)

gd(fp, x, h = 100, tol = 1e-04, m = 1000)

Arguments

fp

function representing the derivative of f

x

an initial estimate of the location of the minimum

h

the step size

tol

the error tolerance

m

the maximum number of iterations

Value

the x value of the minimum found

Details

Gradient descent finds local minima of a function f by repeatedly stepping against its derivative fp. Starting from the initial estimate x, each iteration moves by the step size h times the derivative's value; gradasc steps in the opposite direction to find local maxima, and gd applies the same procedure to a vector-valued x, with fp returning the gradient. Iteration stops when the change between steps falls below the error tolerance tol, or after at most m iterations.
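The iteration described above can be sketched in a few lines of R. The descend helper below is a hypothetical illustration of the update rule, not the package's implementation:

```r
# Sketch of the gradient-descent update: step against the derivative
# until the step shrinks below tol or m iterations are reached.
descend <- function(fp, x, h = 1e-3, tol = 1e-4, m = 1000) {
  for (i in seq_len(m)) {
    step <- h * fp(x)    # move proportionally to the derivative
    x <- x - step        # descend: step against the slope
    if (all(abs(step) < tol)) break
  }
  x
}

# f(x) = (x - 3)^2 has derivative 2 * (x - 3) and a minimum at x = 3
descend(function(x) 2 * (x - 3), 1, h = 0.05)
```

A larger h converges in fewer iterations but risks overshooting; a smaller h may hit the iteration cap m before the step falls below tol.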

See also

Other optimz: bisection(), goldsect(), hillclimbing(), newton(), sa(), secant()

Examples

fp <- function(x) { x^3 + 3 * x^2 - 1 }
graddsc(fp, 0)
#> [1] 0.5067967

f <- function(x) { (x[1] - 1)^2 + 4 * (x[2] - 1)^2 }
fp <- function(x) {
    x1 <- 2 * x[1] - 2
    x2 <- 8 * x[2] - 8

    return(c(x1, x2))
}
gd(fp, c(0, 0), 0.05)
#> [1] 0.9991405 1.0000000