Without getting too technical, note that the standard error changes slightly for each question in a survey, depending on how respondents answer that particular question. The more polarized the response, a 90 percent "yes" and 10 percent "no" split, for example, the more confidence can be placed in the result. The closer the question comes to a 50:50 split, the greater the error and the less confidence that can be placed in the answer. This is one reason researchers often advise paying the most attention to the polarized answers, where there is a clear difference in response.
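For readers who want to see the arithmetic, here is a minimal sketch of the standard error of a sample proportion, sqrt(p(1 - p) / n), assuming a simple random sample; the group size of 400 is an illustrative assumption:

    import math

    def standard_error(p, n):
        """Standard error of a sample proportion p from n valid responses."""
        return math.sqrt(p * (1 - p) / n)

    n = 400  # illustrative group size; an assumption for this example
    print(standard_error(0.90, n))  # 0.015 -> about 1.5 points for a 90/10 split
    print(standard_error(0.50, n))  # 0.025 -> about 2.5 points for a 50/50 split

At the standard 95 percent confidence level, the margin of error is roughly twice the standard error, so the 50/50 question carries about plus or minus 5 points of error while the 90/10 question carries only about plus or minus 3.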
A general rule of thumb is to collect at least 100 valid survey responses from each group you are trying to research (100 valid surveys from current members, 100 from lapsed members, 100 from listeners who have never been members, and so on). At least 200 responses from each group is even better, and holding the margin of error to no more than plus or minus 5 percent (at the standard 95 percent confidence level) requires 400 valid responses per group. Since in most surveys it is a good idea to be able to compare the responses from different types of listeners (members versus nonmembers, first-year members versus long-term members, news versus music core listeners, for example), it is wise to have a sample large enough that these segmented results can be used with confidence.
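As a rough check on those figures, the textbook sample-size formula, n = p(1 - p)(z / E)^2, can be sketched as follows; the function name is mine, and the defaults assume the worst-case 50:50 split at 95 percent confidence:

    import math

    def required_sample(margin, z=1.96, p=0.5):
        """Valid responses needed per group for a given margin of error.
        z = 1.96 corresponds to 95 percent confidence; p = 0.5 is the
        worst-case (50:50) split."""
        return math.ceil(p * (1 - p) * (z / margin) ** 2)

    print(required_sample(0.05))  # 385, roughly 400 in round numbers
    print(required_sample(0.10))  # 97, roughly the 100-per-group rule of thumb

Note how the two rules of thumb fall out of the same formula: 100 responses per group corresponds to about a plus or minus 10 percent margin, and 400 to about plus or minus 5 percent.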
Once you know how many responses you want to receive, the next question is usually, "How many surveys do we have to send out, or how many calls will we have to make, to get the desired number of responses?" The technical term for this is incidence.
The number required varies with each survey and depends on a wide range of factors, including survey length, whether it is a mail or telephone survey, the nature of the target audience, the market location, the time of year, and a host of other variables. It is always preferable to have more responses rather than fewer, so if in doubt, send out more surveys or plan to make more calls than you think you will need.
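As a back-of-the-envelope planning sketch, if you can estimate the share of mailed surveys or dialed numbers that will end in a completed, valid survey, the outgoing volume works out as below. The 40 percent response rate and 15 percent safety cushion are illustrative assumptions, not fixed values; substitute figures from your own experience with your audience:

    import math

    def surveys_to_send(target, response_rate, cushion=0.15):
        """Surveys to mail (or calls to plan) to reach a response target.
        response_rate and cushion are assumptions supplied by the planner;
        the cushion reflects the advice that more responses are always better."""
        return math.ceil(target / response_rate * (1 + cushion))

    # 400 valid responses at an assumed 40 percent mail response rate:
    print(surveys_to_send(400, 0.40))  # 1150 surveys to mail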