diff --git a/Video-Image-Data-Tutorial/Ford_Video_Analysis.ipynb b/Video-Image-Data-Tutorial/Ford_Video_Analysis.ipynb
index c65289a81d9fb1e5b632d9795fcad66fcf22e60c..acbd75012ff4bdb5f7b487246af4482fcac89cc7 100644
--- a/Video-Image-Data-Tutorial/Ford_Video_Analysis.ipynb
+++ b/Video-Image-Data-Tutorial/Ford_Video_Analysis.ipynb
@@ -1,4 +1,5 @@
 {
+<<<<<<< HEAD
   "cells": [
     {
       "cell_type": "markdown",
@@ -230,4 +231,248 @@
   },
   "nbformat": 4,
   "nbformat_minor": 0
+=======
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "OqSmFS3lZj9Y"
+   },
+   "source": [
+    "# <center> Video Image Data </center>\n",
+    "#### CMSE 495 Ford Group\n",
+    "\n",
+    "This tutorial teaches the user how to input a video file, such a mp4 and convert each frame of the video into a jpeg image using python, primarily in a Jupyter notebook."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "uey1neRTkRwd"
+   },
+   "source": [
+    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pathakis/DataTools_Tutorial_Demo/blob/main/Video-Image-Data-Tutorial/Ford_Video_Analysis.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "GQIJxk_kdjT1"
+   },
+   "source": [
+    "<b> Environment Setup (Makefile):</b>\n",
+    "- Use the command 'make innit' automatically set up the environment for you.\n",
+    "\n",
+    "<b> Environment Setup (Manual):</b>\n",
+    "- Set up new environment using pip/conda (Conda Recommended). Use command \n",
+    "\n",
+    "    <code> conda create -n envs python=3.10 </code>\n",
+    "\n",
+    "- Activate your new environment. Use command \n",
+    "\n",
+    "    <code> conda activate envs</code>\n",
+    "\n",
+    "- Install the requisite packages.Use command \n",
+    "\n",
+    "    <code> pip install opencv-python</code> or,\n",
+    "\n",
+    "    <code> conda install -c conda-forge opencv</code>\n",
+    "\n",
+    "<b> Usage Instructions:</b>\n",
+    "\n",
+    "- The example call shows the format in which this func may be used.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "f0aUW4PLdobE"
+   },
+   "source": [
+    "This process uses 2  packages called [os](https://docs.python.org/3/library/os.html)  and [cv2](https://pypi.org/project/opencv-python/). Os provides  miscellaneous operating system interfaces such as opening and reading the files."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "JGkN_k3BgXV8"
+   },
+   "outputs": [],
+   "source": [
+    "# !pip install opencv-python"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "id": "PCbpVR-HZzmt"
+   },
+   "outputs": [],
+   "source": [
+    "import cv2\n",
+    "import os\n",
+    "import glob\n",
+    "import urllib.request\n",
+    "\n",
+    "\n",
+    "def video_to_frames(file_path, directory_path, greyscale = False):\n",
+    "\n",
+    "  '''This function will change a video file to a frames'''\n",
+    "    \n",
+    "    #opening the video\n",
+    "  vidcap = cv2.VideoCapture(file_path) \n",
+    "    \n",
+    "  dirname = directory_path\n",
+    "  os.makedirs(dirname, exist_ok=True)\n",
+    "  \n",
+    "  #capturing a frame as well as a boolean value representing whether an image was properly opened\n",
+    "  success,image = vidcap.read()\n",
+    "  \n",
+    "  count = 0\n",
+    "  \n",
+    "  while success:\n",
+    "      \n",
+    "    #this is specifically for foam_segmented.avi\n",
+    "    if greyscale:\n",
+    "        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n",
+    "        \n",
+    "    #writing the image to a the directory path that was specified, \n",
+    "    #if the path specified does not exist then it will be created\n",
+    "    #this finctionality was added so that the images could be stored in a separate folder\n",
+    "    #example of output file names: 1.jpg, 2.jpg, 3.jpg, and so on\n",
+    "    cv2.imwrite(os.path.join(dirname, str(count)+\".jpg\"), image)\n",
+    "    success,image = vidcap.read()\n",
+    "    count += 1\n",
+    "    #All the frames will be added in order\n",
+    "    cv2.waitKey(1) \n",
+    "  \n",
+    "  #releasing the threads\n",
+    "    vidcap.release()\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "hBfLcvMhh6v7"
+   },
+   "source": [
+    "DEMO FOR THE **avi_to_frames** \n",
+    "\n",
+    "1. To download a sample avi file that you want to work with, use the following code `urllib.request.urlretrieve('https://www.engr.colostate.edu/me/facil/dynamics/files/drop.avi', 'testing.mp4')` \n",
+    "\n",
+    "2. After the video has been downloaded `avi_frames(./testing.mp4, path_to_where_you_want_the_frames, False)` this will create a folder with frames from the video."
+   ]
+  },
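+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The cell below is a minimal sketch of the two steps above. The file name `testing.mp4` and the output folder `./frames` are example values, not fixed by this tutorial; uncomment the lines to run them."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# download the sample video (requires an internet connection)\n",
+    "# urllib.request.urlretrieve('https://www.engr.colostate.edu/me/facil/dynamics/files/drop.avi', 'testing.mp4')\n",
+    "\n",
+    "# split the video into frames; './frames' is an example output folder\n",
+    "# video_to_frames('./testing.mp4', './frames', greyscale=False)"
+   ]
+  },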
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "id": "-N-hJD11jJjo"
+   },
+   "outputs": [],
+   "source": [
+    "# Making a Video From Frames\n",
+    "def frames_to_video(directory_path, fps, width, height):\n",
+    "  fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n",
+    "  video = cv2.VideoWriter('video.avi', fourcc, fps, (width, height))\n",
+    "  num_frames = len([name for name in os.listdir(directory_path) if os.path.isfile(name)])\n",
+    "\n",
+    "  for j in range(num_frames):\n",
+    "    img = cv2.imread(str(j) + '.jpg')\n",
+    "    video.write(img)\n",
+    "\n",
+    "  cv2.destroyAllWindows()\n",
+    "  video.release()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "sIhvc4DplLkE"
+   },
+   "source": [
+    "DEMO FOR THE **frames_to_video** \n",
+    "\n",
+    "1. We will be working with the frames that we created using `avi_to_frames`. If you have not created those frames feel free to look at the steps above.\n",
+    "\n",
+    "2. Run the following command (**make the necessary changes in the function call**)`frames_to_video(where_the_frames_are, fps, width, height)`\n",
+    "\n",
+    "3. The video will show up in the current directory."
+   ]
+  },
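+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The cell below is a minimal sketch of the call above. The folder `./frames`, 30 fps, and the 640x480 size are example values; the width and height must match the dimensions of the saved frames. Uncomment the line to run it."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# build video.avi from the frames in './frames' (example values)\n",
+    "# frames_to_video('./frames', 30, 640, 480)"
+   ]
+  },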
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "bnfzxPNJeZVS"
+   },
+   "source": [
+    "**The code below will put the image arrays into a list.** This snippet of code utilizes glob but packages like os can also be used."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {
+    "id": "mLbqPW7SeJd-"
+   },
+   "outputs": [],
+   "source": [
+    "path = glob.glob(\"./*.jpg\")\n",
+    "images = []\n",
+    "for img in path:\n",
+    "  n = cv2.imread(img)\n",
+    "  images.append(n)"
+   ]
+  },
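+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an alternative sketch, the same list can be built with `os` instead of `glob`. The directory `.` is an example value, and the numeric sort assumes the frames are named `0.jpg`, `1.jpg`, and so on, as produced by `video_to_frames`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# list the jpg files in the current directory and sort them numerically by frame index\n",
+    "frame_names = [name for name in os.listdir('.') if name.endswith('.jpg')]\n",
+    "frame_names.sort(key=lambda name: int(os.path.splitext(name)[0]))\n",
+    "\n",
+    "# read each frame into a list of image arrays\n",
+    "images_os = [cv2.imread(name) for name in frame_names]"
+   ]
+  },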
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "GIq-_h4wdxWM"
+   },
+   "source": [
+    "<b> References:</b>\n",
+    "- [Managing Environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)\n",
+    "- [Open CV in python](https://pypi.org/project/opencv-python/)\n",
+    "- [Colab Button](https://www.youtube.com/watch?v=RoGZIbwzG5w)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**Comments:**\n",
+    "\n",
+    "The videos is not working "
+   ]
+  }
+ ],
+ "metadata": {
+  "colab": {
+   "collapsed_sections": [],
+   "name": "Ford_Video_Analysis.ipynb",
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.8"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+>>>>>>> 5c328fa (this is a testing comment)
 }