40 for 40: Profiles of individuals integral to Khoury’s success
Quality Over Quantity – A Neural Network Pioneer
Ronald J. Williams is a retired professor at Northeastern’s Khoury College of Computer Sciences and one of the founding pioneers of neural networks. Today’s neural network systems, such as ChatGPT, can trace their origins to his seminal paper “Learning representations by back-propagating errors,” co-authored with David E. Rumelhart and Geoffrey E. Hinton. The paper introduced a novel method for building and training neural networks, and Williams’ research became foundational to the field of machine learning.
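The core idea of that 1986 paper — run a forward pass through the network, measure the error, then propagate error gradients backward through each layer to update the weights — can be sketched in a few lines. The following is an illustrative pure-Python toy (a 2-2-1 sigmoid network trained on XOR by squared-error gradient descent), not the paper’s notation or a faithful reproduction of its experiments:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# 2-2-1 network: two inputs, two hidden sigmoid units, one sigmoid output
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]                                                     # hidden biases
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0                                                            # output bias

# XOR, the classic task a single-layer network cannot learn
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: output-layer error signal...
        d_out = (y - t) * y * (1 - y)
        # ...propagated back through the output weights to the hidden layer
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= lr * d_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_hid[j] * x[i]
            b1[j] -= lr * d_hid[j]
        b2 -= lr * d_out
loss_after = total_loss()
```

After training, `loss_after` is far smaller than `loss_before`: the backward-propagated gradients have adjusted both layers’ weights, which is exactly what made training multi-layer networks practical.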
Jay Aslam, a Khoury College professor, says, “Ron’s paper launched the field. For much of the history of computer science, neural network-based machine learning was just an idea that could not be realized. In 1986, Rumelhart, Hinton, and Williams proposed the algorithm that made training a neural network truly possible. This spawned great renewed interest in the area and much follow-on work, but the full promise of neural networks had to wait until we had the data and computational resources to implement them at scale.”
Over time, the data and computational resources became available. Aslam continues, “Now, deep learning-based neural networks are at the forefront of AI. This one paper alone has thirty thousand citations, a massive number because it’s that influential.” Williams continued to publish on recurrent neural networks and reinforcement learning, as well as partial order optimum likelihood, a machine learning method used in the prediction of active amino acids in protein structures.
“Ron’s paper launched the field. For much of the history of computer science, neural network-based machine learning was just an idea that could not be realized. In 1986, Rumelhart, Hinton, and Williams proposed the algorithm that made training a neural network truly possible.” — Professor Jay Aslam
Aslam says, “By today’s academic standards, Ron wrote a relatively small number of papers—but those papers were hugely influential. We overlapped at Northeastern for about five years, and we shared the machine learning lab. When I joined, there were only a handful of people doing machine learning research, and much of it was done by him. He was pretty quiet, and for someone who wrote papers that were that influential, he was very humble and down to earth.”