For those who don't know, Chomsky was the main pioneer of the idea that our brains are hard-wired to learn language. Before him it was widely believed (in particular by the psychologist B.F. Skinner) that all behaviors, including language, are learned by basic conditioning techniques that are shared by all animals - think Pavlov's dogs. Chomsky argued instead that the human brain is primed to spontaneously pick up language, by some cognitive mechanism that is unique to humans; he called this hypothetical mental module the "language acquisition device". This opened up the field of cognitive psychology, which is essentially the study of the basic modules in the human (or animal) mind. Other basic modules would include the idea of 3D space and the number sense (these two not being unique to humans).
The first generation of artificial intelligence algorithms tended to be based on formal logic, and they failed pretty spectacularly. But the new generation (now re-named "machine learning") is based on statistics and probability, and the results have been very impressive. The new algorithms typically rely on huge datasets to train extremely complex models - in many cases there is so much data that the algorithms themselves are actually very simple, and they only work because most problems they will have to solve are very similar to problems they trained on.
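To make the "simple algorithm, huge dataset" point concrete, here is a toy sketch (my own illustration, not any production system): a one-nearest-neighbor classifier. There is no real learning step at all - the "model" is just the stored data - yet it answers new queries correctly whenever they closely resemble something already seen.

```python
def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (point, label) pairs; points are tuples of numbers.
    The "model" is just the raw data -- there is no training step at all.
    """
    def dist(a, b):
        # Squared Euclidean distance; fine for comparisons.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

# Tiny made-up 2D dataset: label "A" clusters near (0, 0), "B" near (10, 10).
train = [((0, 1), "A"), ((1, 0), "A"), ((9, 10), "B"), ((10, 9), "B")]
print(nearest_neighbor(train, (0.5, 0.5)))   # A
print(nearest_neighbor(train, (9.5, 9.5)))   # B
```

With four data points this is a parlor trick; with billions, the same memorize-and-match idea starts to look intelligent, because almost every query has a near-twin in the training set.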
From the perspective of engineering applications, this is awesome! But as far as science goes, some people have been arguing that these statistical algorithms can tell us something about how the human brain accomplishes the same task.
In Chomsky's view (and I generally agree with him) they tell us nothing. Google Translate does a good job of mimicking human intelligence, but the mechanisms it uses are fundamentally different from the ones our brain uses, and they don't move us any closer to understanding what's going on inside our heads. Translate, for example, has been trained using all the English text that Google can get its hands on, so it has a good idea of how often the word "fox" follows the phrase "quick brown". That kind of crude pattern matching may be good enough for translation tasks, but Translate has no real understanding of what these terms mean. The human brain, in contrast, is (presumably) based more on things like syntax trees. Put another way, modern machine learning techniques can walk like a duck, swim like a duck, and quack like a duck. But that doesn't mean they're a duck; they're more like feathered robots acting out videos of duck behavior. No matter how realistic they look, studying their insides won't tell you anything about a real duck's internal organs. And as far as applications go, there are critical things a duck can do that even the most realistic robo-duck can't do at all, like laying eggs.
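The "fox follows quick brown" kind of surface statistics can be sketched in a few lines (a minimal illustration with a made-up two-sentence corpus - real systems use vastly more data and more sophisticated models, but the spirit is the same): count which word follows which, with no notion of what any word means.

```python
from collections import Counter, defaultdict

def bigram_counts(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

# Toy corpus standing in for "all the English text Google can find".
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "a quick brown fox is quick",
]
counts = bigram_counts(corpus)
# In this corpus, "fox" is the word most likely to follow "brown":
print(counts["brown"].most_common(1))  # [('fox', 2)]
```

Nothing here knows that a fox is an animal or that "brown" is a color; the table of counts is the entire "understanding".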
The counterargument comes from Peter Norvig of Google (http://norvig.com/chomsky.html). He agrees that machine learning algorithms are different from what goes on in our brains, but argues that we can still gain a lot of insights. For example, we know a lot about what types of things different machine learning algorithms can and cannot learn. If an algorithm can perform like a human, and there are important mathematical properties of this algorithm, then the corresponding human cognitive module may have a lot of those same properties, even if the algorithm used is different.