A tribute to Ron Williams, Khoury professor and machine learning pioneer
Tue 03.05.24 / Milton Posner
Ronald J. Williams, a professor emeritus of computer science at Khoury College and a pioneer in the field of neural networks, passed away last month. He was 79 years old.
Among Williams’ contributions to computing over his 22 years at Khoury College, it was his paper “Learning representations by back-propagating errors,” co-authored with David Rumelhart and Geoffrey Hinton, that rang the loudest and longest. In proposing a novel method for building and training neural networks, the trio laid the groundwork for the eventual development of neural-network-based platforms such as ChatGPT.
“For much of the history of computer science, neural-network-based machine learning was just an idea that could not be realized. Then in 1986, Rumelhart, Hinton, and Williams proposed the algorithm that made training a neural network truly possible,” Khoury professor Jay Aslam said in 2023. “Ron’s paper launched the field.”
At the time, the tech world lacked the data and computational resources to scale the paper’s findings and realize its vision. But over time, as the paper sparked interest in neural networks, other computer scientists followed in the trio’s footsteps, and the resources grew to match the ideas.
“Now, deep-learning-based neural networks are at the forefront of AI, and this one paper has 30,000 citations,” Aslam said. “It’s that influential.”
Williams continued to research neural networks and reinforcement learning, as well as partial order optimum likelihood, a machine learning method used to predict active amino acids in protein structures. His publishing output, while small by today’s standards, packed an influential punch.
“We overlapped at Northeastern for five years, and we shared the machine learning lab,” Aslam remembered. “When I joined, there were only a handful of people doing machine learning research, much of it done by him. He was pretty quiet, and for someone who wrote papers that were that influential, he was very humble and down to earth.”
Before joining Northeastern in 1986, Williams worked for a defense contractor, where he developed algorithms to aid the US military in finding Soviet submarines. Outside of work, Williams was an avid musician. During his undergraduate and doctoral studies in his native Southern California, he, his guitar, his keyboard, and his band could be found gracing numerous local bars at night. He found additional years-long passions in trivia, bridge, and skiing, and grew to love the sports teams of his adopted home of Boston.
He is survived by his wife Pam, his three children, and his five grandchildren.