Highest scored questions
12,769 questions
182 votes · 12 answers · 62k views
Could a paradox kill an AI?
In Portal 2 we see that AIs can be "killed" by making them think about a paradox.
I assume this works by forcing the AI into an infinite loop which would essentially "freeze" the computer's consciousness.
...
102 votes · 14 answers · 8k views
How could self-driving cars make ethical decisions about who to kill?
Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation.
Here are a few examples of unfortunate situations caused by a set of events:
...
102 votes · 9 answers · 18k views
What is the difference between artificial intelligence and machine learning?
These two terms seem to be related, especially in their application in computer science and software engineering.
Is one a subset of another?
Is one a tool used to build a system for the other?
What ...
102 votes · 4 answers · 94k views
How can neural networks deal with varying input sizes?
As far as I can tell, neural networks have a fixed number of neurons in the input layer.
If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a ...
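A common workaround for the fixed-size input layer is to pad shorter sequences and truncate longer ones to a single fixed length, keeping a mask that marks which positions are real tokens. A minimal sketch (the function name, pad id, and lengths are illustrative assumptions, not part of the question):

```python
def pad_to_fixed_length(token_ids, max_len, pad_id=0):
    """Pad (or truncate) a variable-length token sequence so that a
    network with a fixed-size input layer can consume it.

    Returns the fixed-length ids plus a mask (1 = real token, 0 = pad)
    that downstream layers can use to ignore the padding positions.
    """
    ids = list(token_ids)[:max_len]                      # truncate if too long
    mask = [1] * len(ids) + [0] * (max_len - len(ids))   # mark real tokens
    ids = ids + [pad_id] * (max_len - len(ids))          # pad if too short
    return ids, mask

# Two "sentences" of different lengths mapped to the same input size.
a, a_mask = pad_to_fixed_length([5, 8, 2], max_len=6)
b, b_mask = pad_to_fixed_length([7, 1, 4, 9, 3, 2, 6], max_len=6)
print(a)  # [5, 8, 2, 0, 0, 0]
print(b)  # [7, 1, 4, 9, 3, 2]  (truncated to max_len)
```

Other answers to this question typically mention recurrent networks, pooling over positions, or attention, which handle variable lengths without a hard cap; padding plus masking is simply the most common preprocessing-level fix.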
100 votes · 7 answers · 20k views
Do scientists know what is happening inside artificial neural networks?
Do scientists or research experts know, behind the scenes, what is happening inside a complex "deep" neural network with millions of connections firing at any instant? Do they understand ...
99 votes · 3 answers · 87k views
What is self-supervised learning in machine learning?
What is self-supervised learning in machine learning? How is it different from supervised learning?
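The core idea behind self-supervised learning is that the labels are derived from the unlabeled data itself via a pretext task, rather than from human annotation. A minimal sketch of one common pretext task, masked-token prediction (the function and mask token are illustrative assumptions):

```python
def masked_prediction_pairs(tokens, mask_token="[MASK]"):
    """Turn one unlabeled token sequence into (input, target) training
    pairs: each pair hides one position and uses the original token as
    the label. The supervision comes from the data itself.
    """
    pairs = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        pairs.append((masked, tokens[i]))
    return pairs

pairs = masked_prediction_pairs(["the", "cat", "sat"])
for inp, target in pairs:
    print(inp, "->", target)
# ['[MASK]', 'cat', 'sat'] -> the   (and so on for each position)
```

In supervised learning those targets would have to be annotated by hand; here every raw sequence yields training pairs for free, which is what lets models pretrain on huge unlabeled corpora.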
92 votes · 6 answers · 104k views
What's the difference between model-free and model-based reinforcement learning?
What's the difference between model-free and model-based reinforcement learning?
It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. In ...
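The distinction can be made concrete on a toy MDP: a model-based method (value iteration) consumes the transition and reward function directly, while a model-free method (Q-learning) only ever samples transitions from the environment. A minimal sketch under assumed dynamics (a two-state world where action 1 toggles the state and being in state 1 pays reward 1):

```python
import random

GAMMA = 0.9

def step(s, a):
    """Assumed deterministic 2-state MDP: action 1 toggles the state,
    action 0 stays put; reward 1 whenever the next state is state 1."""
    s2 = s ^ a
    return s2, 1.0 if s2 == 1 else 0.0

def value_iteration(n_iters=100):
    """Model-based: needs `step` as a known model of the environment."""
    V = [0.0, 0.0]
    for _ in range(n_iters):
        V = [max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in (0, 1))
             for s in (0, 1)]
    return V

def q_learning(n_steps=5000, lr=0.5, eps=0.3, seed=0):
    """Model-free: only observes sampled (s, a, r, s') transitions."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(n_steps):
        a = rng.randrange(2) if rng.random() < eps \
            else max((0, 1), key=lambda a: Q[s][a])   # epsilon-greedy
        s2, r = step(s, a)
        Q[s][a] += lr * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

V = value_iteration()
Q = q_learning()
greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in (0, 1)]
print(greedy)  # both approaches agree: move to state 1, then stay
```

Note that the Q-learner never reads the transition table; it could be wrapped around a real simulator it cannot inspect, which is exactly what "model-free" means here.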
88 votes · 9 answers · 8k views
How is it possible that deep neural networks are so easily fooled?
The following page/study demonstrates that deep neural networks are easily fooled, giving high-confidence predictions for unrecognisable images, e.g.
How is this possible? Can you please explain ...
82 votes · 4 answers · 117k views
Why does the transformer do better than RNN and LSTM in long-range context dependencies?
I am reading the article How Transformers Work where the author writes
Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to ...
71 votes · 9 answers · 11k views
Why do we need explainable AI?
If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the ...
69 votes · 4 answers · 127k views
How to select number of hidden layers and number of memory cells in an LSTM?
I am trying to find existing research on how to select the number of hidden layers, and their size, in an LSTM-based RNN.
Is there an article where this problem is being investigated, i.e., ...
68 votes · 10 answers · 47k views
Why is Python such a popular language in the AI field?
First of all, I'm a beginner studying AI and this is not an opinion-oriented question or one to compare programming languages. I'm not implying that Python is the best language. But the fact is that ...
66 votes · 4 answers · 17k views
Are neural networks prone to catastrophic forgetting?
Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous.
Now imagine that previously you have shown it millions ...
66 votes · 13 answers · 62k views
In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?
My understanding is that the convolutional layer of a convolutional neural network has four dimensions: ...
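The short answer accepted by most frameworks: each filter carries its own weights for every input channel, so a layer's weight tensor has shape `(out_channels, in_channels, kH, kW)` (the layout used by, e.g., PyTorch's `Conv2d`), and each output channel sums the per-channel correlations. A naive sketch with assumed sizes:

```python
import numpy as np

# Assumed layer sizes: 3 input channels, 4 filters, 5x5 kernels.
in_channels, out_channels, kH, kW = 3, 4, 5, 5
rng = np.random.default_rng(0)
weights = rng.standard_normal((out_channels, in_channels, kH, kW))
x = rng.standard_normal((in_channels, 8, 8))   # one 8x8 input image

def conv2d(x, weights):
    """Naive valid-mode 2-D convolution: each output channel is the sum,
    over input channels, of a correlation with that filter's OWN slice
    of weights for that channel (no weight sharing across channels)."""
    oc, ic, kh, kw = weights.shape
    oh, ow = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((oc, oh, ow))
    for o in range(oc):
        for i in range(ic):            # separate weights per input channel
            for r in range(oh):
                for c in range(ow):
                    out[o, r, c] += np.sum(
                        x[i, r:r + kh, c:c + kw] * weights[o, i])
    return out

y = conv2d(x, weights)
print(y.shape)           # (4, 4, 4): one map per filter
print(weights[0].shape)  # (3, 5, 5): one filter spans all input channels
```

So the parameter count is `out_channels * in_channels * kH * kW` (plus one bias per filter): 4 * 3 * 5 * 5 = 300 weights here, not 4 * 5 * 5.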
60 votes · 11 answers · 13k views
What are some well-known problems where neural networks don't do very well?
Background: It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of their universal approximation capabilities. However, in ...