Linear Regression by Hand with Gradient Descent and Visualization

The following code applies gradient descent to simple linear regression. To see the visual element properly, you will need to view your plots inline (I use the Spyder IDE). To use my code, it is best to understand each line and make updates (such as file paths) where needed; if you simply copy and paste it, it is unlikely to run as-is.

HOW TO Update the Plots to Show Inline in the Spyder IDE.
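In recent versions of Spyder, inline plotting is usually set under Tools > Preferences > IPython console > Graphics > Graphics backend > Inline (restart the console after changing it). In Jupyter or another IDE, any inline backend works.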

The Code:

# -*- coding: utf-8 -*-
"""
Created on Thu Oct 27 19:43:02 2022

@author: profa
"""

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time

plt.rcParams["figure.figsize"] = (12, 9)
datafile="C:/Users/profa/Desktop/UCB/Classes/NNCSCI5922/Code/HeartRisk_JustNums_noLabels_X_y.csv"
## Data looks like this...
#     Cholesterol  Weight
# 0           251     267
# 1           105     103
# 2           156     193
# 3           309     250
# 4           198     210
# 5           189     189
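## If you do not have this exact CSV, any two-column numeric file with
## the same layout will work. A rough sketch for generating comparable
## synthetic data (the file name, column names, and value ranges here
## are assumptions based on the preview above):
# rng = np.random.default_rng(0)
# Chol = rng.integers(100, 320, size=100)
# Wt = (0.8*Chol + rng.normal(0, 20, size=100)).round().astype(int)
# pd.DataFrame({"Cholesterol": Chol, "Weight": Wt}).to_csv(
#     "HeartRisk_Synthetic.csv", index=False)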

## NOTES ##
##
## You can change the epochs
## You can change the learning rate
## 

InputData=pd.read_csv(datafile)
print(InputData)
X=InputData.iloc[:,0]
y=InputData.iloc[:,1]

print(X)
print(type(X))
print(y)

plt.scatter(X, y)
plt.show()

## Set up the initial weight (slope) m and bias (intercept) b
m = 0
b = 0

## Learning rate
LR = .00001

epochs=35
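## A note on the learning rate: X and y are in the hundreds, so the
## gradient dL_dm (a sum of X * (y_hat - y) terms) can be very large.
## A small LR such as .00001 keeps the update to m from overshooting;
## a much larger value (e.g. .001) will likely make the error diverge.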

## What is our loss function? How do we calculate
## the error between y_hat (what we predict)
## and y (what is true)?
##
## MSE = (1/n) * SUM (y_hat - y)^2
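## Taking derivatives of the MSE with respect to m and b gives
## the gradients used in the loop below:
##   dL/dm = (2/n) * SUM x * (y_hat - y)
##   dL/db = (2/n) * SUM (y_hat - y)
## The constant factor of 2 is dropped in the code (a common
## simplification, since it can be absorbed into the learning rate).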

epochslist=[]
AllErrors=[]
n = len(X)

for i in range(epochs):
    print("Epoch \n", i)
    epochslist.append(i)
    y_hat = m*X + b
    #print("y_hat is\n", y_hat)
    #print("y is\n", y)
    
    ## Squared error at each point, then the mean (MSE)
    Error = (y_hat - y)**2
    MSE = np.mean(Error)
    print(" Mean Squared Error is\n", MSE)

    AllErrors.append(MSE)
    #print(Error)
    ## Gradients of the loss with respect to m and b
    dL_dm = (1/n) * sum(X * (y_hat - y))
    dL_db = (1/n) * sum(y_hat - y)
  
    ## Use the gradient to update m and b
    m = m - LR * dL_dm
    b = b - LR * dL_db
    
    ## Plot the data and the current fit line
    plt.scatter(X, y)
    plt.plot([min(X), max(X)], [m*min(X) + b, m*max(X) + b], color="green")
    plt.show()
    time.sleep(.5)
    
    ## Notice that as the accuracy improves and the
    ## difference between y_hat and y shrinks,
    ## the updates become smaller.



## Predictions
y_hat = m*X + b

## See the results
plt.scatter(X,y)
plt.plot([min(X), max(X)], [m*min(X) + b, m*max(X) + b], color="green")
plt.show()
time.sleep(5)

print(epochslist)
print(AllErrors)

plt.scatter(epochslist, AllErrors)
plt.xlabel("Epoch")
plt.ylabel("Mean Squared Error")
plt.title("Error by Epoch")
plt.show()
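If you want a quick sanity check on the result, you can compare the learned slope and intercept to the closed-form least-squares fit. Here is a minimal sketch using NumPy's polyfit, assuming X, y, m, and b from the code above are still in memory; with only a handful of epochs and a tiny learning rate the two fits will not match exactly, but they should be roughly in the same ballpark.

slope, intercept = np.polyfit(X, y, 1)   ## closed-form least-squares line
print("Gradient descent:   m =", m, " b =", b)
print("Least-squares fit:  m =", slope, " b =", intercept)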