Create tags

From Stampy's Wiki

These 73 tags are used but have not had their tag pages created.

All uncreated tags

Create: Academia - Used by 3: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?
Create: Acausal trade - Used by 1: How might "acausal trade" affect alignment?
Create: Actors - Used by 4: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?, What are some of the leading AI capabilities organizations?, Which country will AGI likely be created by, and does this matter?
Create: Agency - Used by 4: Linnea's Answer to What is an "agent"?, How might things go wrong with AI even without an agentic superintelligence?, What is an "agent"?, What's meant by calling an AI "agenty" or "agentlike"?
Create: Agent foundations - Used by 3: If intelligence is the ability to predict observations, should I quantify this in terms of a deterministic or probabilistic prediction function?, Plex's Answer to What is "agent foundations"?, What is "agent foundations"?
Create: Agi fire alarm - Used by 4: How would I know if AGI were imminent?, Should we expect "warning shots" before an unrecoverable catastrophe?, What would a world shortly before AGI look like?, Would "warning shots" make a difference and, if so, would they be helpful or harmful?
Create: Algorithmic progress - Used by 1: How software- and/or hardware-bottlenecked are we on AGI?
Create: Alignment - Used by 1: Which organizations are working on AI safety?
Create: Alignment proposals - Used by 1: What alignment strategies are scalably safe and competitive?
Create: Alignment targets - Used by 1: What should the first AGI systems be aligned to do?
Create: Arms race - Used by 1: Which military applications of AI are likely to be developed?
Create: Biological cognitive enhancement - Used by 1: Could we get significant biological intelligence enhancements long before AGI?
Create: Brain - Used by 1: Which modern brain measurement technique would I use to train transformers to predict my brain state?
Create: Civilization - Used by 2: Answer to What harm could a single superintelligence do when it took so many humans to build civilization?, What harm could a single superintelligence do when it took so many humans to build civilization?
Create: Communication - Used by 1: What’s a good AI alignment elevator pitch?
Create: Community - Used by 1: How can I contribute in the area of community building?
Create: Compute - Used by 2: Jrmyp's Answer to What are "scaling laws" and how are they relevant to safety?, How software- and/or hardware-bottlenecked are we on AGI?
Create: Content - Used by 2: Jrmyp's Answer to What are some good podcasts about AI alignment?, What are some good podcasts about AI alignment?
Create: Creativity - Used by 2: Aprillion's Answer to Can AI be creative?, Can AI be creative?
Create: Cybersecurity - Used by 1: How might AGI interface with cybersecurity?
Create: Debate - Used by 2: Plex's Answer to What is AI Safety via Debate?, What is AI Safety via Debate?
Create: Deep learning - Used by 1: Nico Hill2's Answer to Would AI alignment be hard with deep learning?
Create: Deepmind - Used by 2: Answer to Superintelligence sounds like science fiction. Do people think about this in the real world?, Plex's Answer to What are some of the most impressive recent advances in AI capabilities?
Create: Differential technological development - Used by 1: How possible (and how desirable) is it to change which path humanity follows to get to AGI?
Create: Do what i mean - Used by 2: How might a real-world AI system that receives orders in natural language and does what you mean look?, What is "Do What I Mean"?
Create: Emotions - Used by 1: Could AI have basic emotions?
Create: Ethics - Used by 3: Nico Hill2's Answer to What are the ethical challenges related to whole brain emulation?, What are the ethical challenges related to whole brain emulation?, What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
Create: Eutopia - Used by 4: Gelisam's Answer to What would a good future with AGI look like?, Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?, What beneficial things would an aligned superintelligence be able to do?, What would a good future with AGI look like?
Create: Friendly ai - Used by 4: Answer to Can we add "friendliness" to any artificial intelligence design?, Answer to What is "friendly AI"?, Can we add "friendliness" to any artificial intelligence design?, What is "friendly AI"?
Create: Goals - Used by 2: Answer to What can we expect the motivations of a superintelligent machine to be?, What can we expect the motivations of a superintelligent machine to be?
Create: Governance - Used by 3: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practises, institutions & processes, awareness, regulation, certification, etc?, Which organizations are working on AI policy?
Create: Government - Used by 7: Are any major politicians concerned about this?, Are there any AI alignment projects which governments could usefully put a very large amount of resources into?, Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practises, institutions & processes, awareness, regulation, certification, etc?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?, How likely is it that governments will play a significant role? What role would be desirable, if any?, How tractable is it to get governments to play a good role (rather than a bad role) and/or to get them to play a role at all (rather than no role)?
Create: Hedonium - Used by 4: Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?, Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?, TapuZuko's Answer to Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?, What is "hedonium"?
Create: Implementation - Used by 1: Plex's Answer to Why can’t we just…
Create: Information security - Used by 2: How important is research closure and OPSEC for capabilities-synergistic ideas?, How might we reduce the diffusion of dangerous AI technology to insufficiently careful actors?
Create: Infrastructure - Used by 1: How does the current global microchip supply chain work, and who has political power over it?
Create: Inside view - Used by 2: Helenator's Answer to How do I form my own views about AI safety?, How do I form my own views about AI safety?
Create: Institutions - Used by 3: How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practises, institutions & processes, awareness, regulation, certification, etc?, How likely are AI organizations to respond appropriately to the risks of their creations?, How successfully have institutions managed risks from novel technology in the past?
Create: Intelligence amplification - Used by 1: Are there promising ways to make AI alignment researchers smarter?
Create: Investmenting - Used by 1: How should I change my financial investments in response to the possibility of transformative AI?
Create: Megaprojects - Used by 1: Are there any AI alignment projects which governments could usefully put a very large amount of resources into?
Create: Mentorship - Used by 2: QZ's Answer to Where can I find mentorship and advice for becoming a researcher?, Where can I find mentorship and advice for becoming a researcher?
Create: Metaethics - Used by 1: How is metaethics relevant to AI alignment?
Create: Metaphilosophy - Used by 1: What is "metaphilosophy" and how does it relate to AI safety?
Create: Metaphors - Used by 2: Plex's Answer to Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?, Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?
Create: Multipolar - Used by 1: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?
Create: Neural networks - Used by 2: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?
Create: Objections - Used by 3: Plex's Answer to What are some objections to the importance of AI alignment?, What are some common objections to the need for AI alignment, and brief responses to these?, What are some objections to the importance of AI alignment?
Create: Open problems - Used by 1: What are some open research questions in AI alignment?
Create: Openai - Used by 1: Linnea's Answer to What does Elon Musk think about AI safety?
Create: Outreach - Used by 1: What convinced people working on AI alignment that it was worth spending their time on this cause?
Create: Paradigm - Used by 2: If AGI comes from a new paradigm, how likely is it to arise late in the paradigm when it is already deployed at scale, versus early on when only a few people are exploring the idea?, To what extent are there meaningfully different paths to AGI, versus just one path?
Create: Pattern-matching - Used by 2: Plex's Answer to How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?, How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?
Create: Person-affecting view - Used by 2: ElloMelon's Answer to If I only care about helping people alive today, does AI safety still matter?, If I only care about helping people alive today, does AI safety still matter?
Create: Personal action - Used by 1: How should I personally prepare for when transformative AI arrives?
Create: Philosophy - Used by 1: What are some problems in philosophy that are related to AI safety?
Create: Physics - Used by 1: What could a superintelligent AI do, and what would be physically impossible even for it?
Create: Politics - Used by 1: Casejp's Answer to Should I engage in political or collective action like signing petitions or sending letters to politicians?
Create: Preferences - Used by 2: Answer to Can't we just tell an AI to do what we want?, Can't we just tell an AI to do what we want?
Create: Productivity - Used by 3: Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?, How can I be a more productive student/researcher?, How can I support alignment researchers to be more productive?
Create: Regulation - Used by 3: Are there any plausibly workable proposals for regulating or banning dangerous AI research?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practises, institutions & processes, awareness, regulation, certification, etc?, Is there something useful we can ask governments to do for AI alignment?
Create: Research assistants - Used by 2: Are there promising ways to make AI alignment researchers smarter?, Could emulated minds do AI alignment research?
Create: Robustness - Used by 2: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?
Create: Security mindset - Used by 3: Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?, Plex's Answer to Why can’t we just…, Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Create: Simulation hypothesis - Used by 2: Is the question of whether we're living in a simulation relevant to AI safety? If so, how?, TapuZuko's Answer to Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
Create: Singleton - Used by 2: Linnea's Answer to What does Elon Musk think about AI safety?, Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?
Create: Solutions - Used by 2: Answer to What would a good solution to AI alignment look like?, What would a good solution to AI alignment look like?
Create: Stable win condition - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Success models - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Tech companies - Used by 1: How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?
Create: Technological unemployment - Used by 2: Quintin Pope's Answer to Will superintelligence make a large part of humanity unemployable?, Will superintelligence make a large part of humanity unemployable?
Create: Transhumanism - Used by 1: Does the importance of AI risk depend on caring about transhumanist utopias?
Create: Values - Used by 3: Luca's Answer to What is a "value handshake"?, Linnea's Answer to Will an aligned superintelligence care about animals other than humans?, What assets need to be protected by/from the AI? Are "human values" sufficient for it?