Suicide is a global public health concern, claiming over one million lives each year worldwide. The ability to understand, identify, and respond to suicidal behavior remains a key priority in suicide prevention. As online social networks have grown in accessibility and popularity, it has become increasingly common for users both to discuss mental health and to receive support from others online. Prior work has evaluated these online conversations by analyzing the language features of social media posts and detecting risk factors and levels of distress among users. In this work, we use natural language processing tools to automatically extract informal topics from posts discussing suicidal ideation and from the responses to those posts. Our evaluation demonstrates that frequent topics within the posts represent psychiatrically defined risk factors for suicide, and frequent topics within the responses represent CDC-recommended responses to suicidal ideation based on identified protective factors.