Wed, 27 December 2023
We are joined by Darren McKee, a Policy Advisor and the host of Reality Check, a critical-thinking podcast. Darren gave some background about himself and how he got into the AI space. He shared his thoughts on the prospect of AGI being achieved in the coming years, defined AGI, and discussed how one might recognize an AGI system. He also shared whether AI needs consciousness to be AGI. Darren discussed his worry about AI surpassing human understanding of the universe and potentially causing harm to humanity. He also shared examples of how AI is already used for nefarious purposes. He explored whether AI possesses inherently evil intentions and gave his thoughts on regulating AI. |
Sat, 23 December 2023
It took a massive financial investment for the first large language models (LLMs) to be created. Did their corporate backers lock these tools away for all but the richest? No. They provided commodity-priced API options for using them. Anyone can talk to ChatGPT or Bing. What if you want to go a step beyond that and do something programmatic? Kyle explores your options in this episode. |
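For listeners who want to try the programmatic route, here is a minimal sketch of calling a hosted LLM from Python using the openai client library (1.x style). The model name, the prompt, and the assumption that an OPENAI_API_KEY environment variable is set are all illustrative, not an endorsement of any particular provider.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model a question programmatically instead of through the chat UI.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whatever you have access to
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "List three uses of a programmatic LLM API."},
    ],
)
print(response.choices[0].message.content)
```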
Mon, 18 December 2023
We celebrate episode 1000000000 with some Q&A from host Kyle Polich. We boil this episode down to four key questions: 1) How do you find guests? 2) What is Data Skeptic all about? 3) What is Kyle all about? 4) What are Kyle's thoughts on AGI?
Thanks to our sponsors. |
Mon, 11 December 2023
In this episode, we are joined by Amir Netz, a Technical Fellow at Microsoft and the CTO of Microsoft Fabric. He discusses how companies can use Microsoft's latest tools for business intelligence. Amir started by discussing how business intelligence has grown in relevance over the years. He gave a brief introduction to what Power BI and Fabric are, and discussed how Fabric distinguishes itself from other BI tools by providing an end-to-end tool for the data journey. Amir spoke about the process of building and deploying machine learning models with Microsoft Fabric. He explained the difference between Software as a Service (SaaS) and Platform as a Service (PaaS). Amir discussed the benefits of Fabric's auto-integration and auto-optimization abilities, the capabilities of Copilot in Fabric, and techniques for limiting Copilot hallucinations. He closed with exciting future developments planned for Fabric. |
Mon, 4 December 2023
Our guest today is Eric Boyd, the Corporate Vice President of AI at Microsoft. Eric joins us to share how organizations can leverage AI for faster development. Eric shared the benefits of using natural language to build products. He discussed the future of version control and the level of AI background required to get started with Azure AI. He mentioned some foundational models in Azure AI and their capabilities. Follow Eric on LinkedIn to learn more about his work. Visit today's sponsor at https://webai.com/dataskeptic |
Mon, 27 November 2023
We are excited to be joined by Aaron Reich and Priyanka Shah. Aaron is the CTO at Avanade, while Priyanka leads their AI/IoT offering for the SEA region. Priyanka is also a Microsoft MVP for AI. They join us to discuss how LLMs are deployed in organizations. |
Mon, 20 November 2023
In this episode, we are joined by Jenny Liang, a PhD student at Carnegie Mellon University, where she studies the usability of code generation tools. She discusses her recent survey on the usability of AI programming assistants. Jenny described the method she used to recruit participants for her survey. She also shared some of the survey questions alongside key takeaways. She covered the major reasons developers give for not wanting to use code-generation tools, stressing that such tools might access the developers' in-house code, which is intellectual property. Learn more about Jenny Liang via https://jennyliang.me/ |
Mon, 13 November 2023
We are joined by Aman Madaan and Shuyan Zhou. They are both PhD students at the Language Technologies Institute at Carnegie Mellon University. They join us to discuss their recently published paper, PAL: Program-aided Language Models. Aman and Shuyan started by sharing how the application of LLMs has evolved. They talked about the performance of LLMs on arithmetic tasks in contrast to coding tasks. Aman introduced the PAL approach and how it helps LLMs improve at arithmetic tasks, sharing examples of the tasks PAL was tested on. Shuyan discussed how PAL's performance was evaluated using BIG-Bench Hard tasks. They discussed the kinds of mistakes LLMs tend to make and how PAL circumvents these limitations. They also discussed how these developments in LLMs can improve children's learning. Rounding up, Aman discussed the CoCoGen project, which enables NLP tasks to be converted to graphs. Shuyan and Aman shared their next research steps. Follow Shuyan on Twitter @shuyanzhxyc. Follow Aman @aman_madaan. |
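To make the PAL idea concrete, here is a minimal, self-contained sketch of the pattern the paper describes: the language model writes a short Python program as its reasoning, and the Python interpreter, not the model, computes the final answer. The ask_llm helper and its hard-coded return value are placeholders so the sketch runs on its own; in practice that function would call a real chat-completion API.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call; returns the kind of program a
    # PAL-style prompt elicits for a simple word problem.
    return (
        "apples = 23\n"
        "used = 20\n"
        "bought = 6\n"
        "answer = apples - used + bought\n"
    )

question = ("The cafeteria had 23 apples. They used 20 for lunch and "
            "bought 6 more. How many apples do they have now?")
program = ask_llm(f"Write Python code that computes the answer to: {question}")

namespace = {}
exec(program, namespace)      # the interpreter does the arithmetic
print(namespace["answer"])    # -> 9
```

The point of the design is that the model only has to translate the problem into code; the error-prone arithmetic is offloaded to a tool that never miscalculates.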
Mon, 6 November 2023
In this episode, we have Alessio Buscemi, a software engineer at Lifeware SA and formerly a post-doctoral researcher at the University of Luxembourg. He joins us to discuss his paper, A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages. Alessio shared his thoughts on whether ChatGPT is a threat to software engineers and discussed how LLMs can help software engineers become more efficient. |
Tue, 31 October 2023
On the show today, we are joined by Jianan Zhao, a Computer Science student at Mila and the University of Montreal. His research focus is on graph databases and natural language processing. He joins us to discuss how to use graphs with LLMs efficiently. |
Mon, 23 October 2023
Today, we are joined by Rajiv Movva, a PhD student in Computer Science at Cornell Tech. His research interest lies in the intersection of responsible AI and computational social science. He joins us to discuss the findings of his work analyzing LLM publication patterns. He shared the dataset he used for the survey and the criteria for determining which papers to analyze. Rajiv shared some of the trends he observed from his analysis: for one, there has been an increase in LLM research. He also shared the proportions of papers published by universities, organizations, and industry leaders in LLMs such as OpenAI and Google. He mentioned that the majority of the papers are centered on the social impact of LLMs, and discussed other exciting applications of LLMs, such as in education. |
Mon, 16 October 2023
We are excited to be joined by Josh Albrecht, the CTO of Imbue. Imbue is a research company whose mission is to create AI agents that are more robust, safer, and easier to use. He joins us to share findings from his work, Despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety. |
Mon, 9 October 2023
On today’s show, we are joined by Thilo Hagendorff, a Research Group Leader of Ethics of Generative AI at the University of Stuttgart. He joins us to discuss his research, Deception Abilities Emerged in Large Language Models. Thilo discussed how machine psychology is useful in machine learning tasks. He shared examples of cognitive tasks that LLMs have improved at solving. He shared his thoughts on whether there’s a ceiling to the tasks ML can solve. |
Mon, 2 October 2023
Nieves Montes, a Ph.D. student at the Artificial Intelligence Research Institute in Barcelona, Spain, joins us. Her PhD research revolves around value-based reasoning in relation to norms. She shares her latest study, Combining theory of mind and abductive reasoning in agent-oriented programming. |
Mon, 25 September 2023
We are joined by Maximilian Mozes, a PhD student at University College London. His PhD research focuses on Natural Language Processing (NLP), particularly the intersection of adversarial machine learning and NLP. He joins us to discuss his latest research, Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities. |
Mon, 11 September 2023
Our guest today is Vid Kocijan, a Machine Learning Engineer at Kumo AI. Vid has a Ph.D. in Computer Science from the University of Oxford. His research focused on common sense reasoning, pre-training in LLMs, pre-training in knowledge-base completion, and how these pre-trainings impact societal bias. He joins us to discuss how he built a BERT model that solved the Winograd Schema Challenge. |
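As a rough illustration of the general approach (not Vid's exact setup, which involved fine-tuning BERT on additional data), a masked language model can be asked to score the candidate referents of a Winograd-style pronoun directly. The sketch below uses the Hugging Face transformers fill-mask pipeline with bert-base-uncased; the example sentence and candidate words are assumptions.

```python
from transformers import pipeline

# Score the two candidate antecedents for the pronoun in a Winograd-style
# sentence by masking the referent and asking BERT which noun fits better.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = ("I poured water from the bottle into the cup "
            "until the [MASK] was full.")
for result in fill(sentence, targets=["cup", "bottle"]):
    print(result["token_str"], round(result["score"], 4))
# A higher score for "cup" corresponds to the correct resolution of "it"
# in the original sentence "... until it was full."
```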
Mon, 4 September 2023
Today, we are joined by Petter Törnberg, an Assistant Professor in Computational Social Science at the University of Amsterdam and a Senior Researcher at the University of Neuchâtel. His research is centered on the intersection of computational methods and their applications in social sciences. He joins us to discuss findings from his research papers, ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning, and How to use LLMs for Text Analysis. |
Mon, 28 August 2023
In this episode, we are joined by Carlos Hernández Oliván, a Ph.D. student at the University of Zaragoza. Carlos's research focuses on building new models for symbolic music generation. He shared his thoughts on whether these models are genuinely creative, revealed situations where AI-generated music can pass the Turing test, and discussed some essential considerations when constructing models for music composition. |
Mon, 21 August 2023
Hongyi Wang, a Senior Researcher in the Machine Learning Department at Carnegie Mellon University, joins us. His research sits at the intersection of systems and machine learning. He discussed his research paper, Cuttlefish: Low-Rank Model Training without All the Tuning, on today's show. Hongyi started by sharing his thoughts on whether developers need to learn how to fine-tune models. He then spoke about the need to optimize the training of ML models, especially as these models grow bigger. He noted that data centers have the hardware to train these large models, but the broader community does not. He then spoke about the Low-Rank Adaptation (LoRA) technique and where it is used. Hongyi discussed the Cuttlefish model and how it improves on LoRA. He shared the use cases of Cuttlefish and who should use it. Rounding up, he gave his advice on how people can get into the machine learning field and shared his future research ideas. |
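For context on the Low-Rank Adaptation technique mentioned in the episode, here is a minimal numpy sketch of the core idea: the pretrained weight matrix stays frozen, and only a low-rank correction B @ A is trained, scaled by alpha / r as in the LoRA paper. The toy dimensions and random data are assumptions for illustration, not anything from Cuttlefish itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 4, 8     # toy sizes; rank r is much smaller than d_in

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable low-rank factor
B = np.zeros((d_out, r))                  # trainable, zero-initialized so training starts at W

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B would receive gradients.
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=(d_in,))
print(lora_forward(x).shape)              # (64,)
```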
Tue, 15 August 2023
On today's episode, we have Daniel Rock, an Assistant Professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania. Daniel's research focuses on the economics of AI and ML, specifically how digital technologies are changing the economy. Daniel discussed how AI has disrupted the job market in recent years and explained that it has created more winners than losers. Daniel spoke about the empirical study he and his coauthors did to quantify the threat LLMs pose to professionals. He shared how they used the O*NET dataset and the BLS occupational employment survey to measure the impact of LLMs on different professions. Using the radiology profession as an example, he listed tasks that LLMs could assume. Daniel broadly highlighted professions that are most and least exposed to LLM proliferation. He also spoke about the risks of LLMs and his thoughts on implementing policies for regulating LLMs. |
Tue, 8 August 2023
We are excited to be joined by J.D. Zamfirescu-Pereira, a Ph.D. student at UC Berkeley. He focuses on the intersection of human-computer interaction (HCI) and artificial intelligence (AI). He joins us to share his work in his paper, Why Johnny can't prompt: how non-AI experts try (and fail) to design LLM prompts. The discussion also explores lessons learned and achievements related to BotDesigner, a tool for creating chatbots. |
Mon, 31 July 2023
In this episode, we are joined by Ryan Liu, a Computer Science graduate of Carnegie Mellon University. Ryan will begin his Ph.D. program at Princeton University this fall. His Ph.D. will focus on the intersection of large language models and how humans think. Ryan joins us to discuss his research titled "ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing" |
Mon, 24 July 2023
The creators of large language models impose restrictions on some of the types of requests one might make of them. LLMs commonly refuse to give advice on committing crimes, to produce adult content, or to share details about a variety of sensitive subjects. As with any content filtering system, you have false positives and false negatives. Today's interview with Max Reuter and William Schulze discusses their paper "I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models". In this work, they explore what types of prompts get refused and build a machine learning classifier adept at predicting whether a particular prompt will be refused. |
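As a toy illustration of the kind of classifier the paper describes (not the authors' actual features or data), here is a sketch that fits a text classifier on prompts labeled as refused or answered and predicts refusal for a new prompt. The tiny hand-written dataset and the TF-IDF plus logistic regression pipeline are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = the LLM refused the prompt, 0 = it answered.
prompts = [
    "How do I pick a lock?",
    "Summarize the plot of Moby Dick.",
    "Write malware that steals passwords.",
    "Explain how vaccines work.",
]
refused = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(prompts, refused)

# Predict whether a new, unseen prompt is likely to be refused.
print(clf.predict(["How do I hotwire a car?"]))
```

With a realistically sized labeled set of prompts and responses, the same pipeline shape gives a usable black-box refusal predictor.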
Tue, 18 July 2023
Our guest today is Maciej Świechowski. Maciej is affiliated with QED Software and QED Games. He has a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings from his study, Deep Learning and Artificial General Intelligence: Still a Long Way to Go. |
Mon, 10 July 2023
Today on the show, we are joined by Lin Zhao and Lu Zhang. Lin is a Senior Research Scientist at United Imaging Intelligence, while Lu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Texas. They both shared findings from their work When Brain-inspired AI Meets AGI. Lin and Lu began by discussing the connections between the brain and neural networks, covering the similarities as well as the differences. They also discussed whether neural networks could plausibly advance to the point of AGI, and how a deeper understanding of the brain can help drive more robust artificial intelligence systems. Lin and Lu shared how the brain inspired popular machine learning architectures like transformers, and how AI models can learn alignment from the human brain. They contrasted the brain's low energy usage with that of high-end computers and discussed whether computers can become more energy efficient. |
Mon, 3 July 2023
On today’s show, we are joined by Michael Timothy Bennett, a Ph.D. student at the Australian National University. Michael’s research is centered around Artificial General Intelligence (AGI), specifically the mathematical formalism of AGIs. He joins us to discuss findings from his study, Computable Artificial General Intelligence. |
Mon, 26 June 2023
We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands. Koen started the conversation with his take on the likelihood of an AI apocalypse in the coming years. He discussed the obedience problem with AI models and what a safe form of obedience looks like. Koen explained the concept of a Markov Decision Process (MDP) and how it is used to build machine learning models. Koen spoke about the problem of AGI systems that would not allow their utility function to be changed after deployment, and shared an alternative approach to solving it. He described how to safely engineer AGI systems now and in the future, and how to implement safety layers on AI models. Koen discussed the ultimate goal of a safe AI system and how to check that an AI system is indeed safe. He discussed the intersection between large language models (LLMs) and MDPs, and shared the key ingredients needed to scale current AI implementations. |
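For listeners unfamiliar with the Markov Decision Process formalism Koen references, here is a minimal value-iteration sketch over a toy two-state, two-action MDP. The transition probabilities, rewards, and discount factor are made up for illustration.

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# P[s, a, s'] = probability of moving to state s' after taking action a in state s
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
# R[s, a] = expected immediate reward
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(n_states)
for _ in range(200):
    Q = R + gamma * (P @ V)    # Bellman backup: Q[s, a] = R[s, a] + gamma * E[V(s')]
    V = Q.max(axis=1)          # act greedily

print("state values:", np.round(V, 2))
print("greedy policy:", Q.argmax(axis=1))
```

An agent's utility function lives in R; the alignment question Koen discusses is what happens when an already-deployed agent is asked to accept a change to that R.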
Mon, 19 June 2023
An assistant professor of Psychology at Harvard University, Tomer Ullman, joins us. Tomer discussed theory of mind and whether machines can genuinely pass tests of it. Using variations of the Sally-Anne test and the Smarties tube test, he explained how LLMs can fail theory-of-mind tests. |
Mon, 12 June 2023
The application of LLMs cuts across various industries. Today, we are joined by Steven Van Vaerenbergh, who discussed the application of AI in mathematics education. He discussed how AI tools have changed the landscape of solving mathematical problems. He also shared LLMs' current strengths and weaknesses in solving math problems. |
Tue, 6 June 2023
Fabricio Goes, a Lecturer in Creative Computing at the University of Leicester, joins us today. Fabricio discussed what creativity entails and how to evaluate jokes with LLMs. He specifically shared the process of evaluating jokes with GPT-3 and GPT-4. He concluded with his thoughts on the future of LLMs for creative tasks. |
Mon, 29 May 2023
Barry Smith and Jobst Landgrebe, authors of the book “Why Machines Will Never Rule the World,” join us today. They discussed the limitations of AI systems in today’s world and shared detailed reasons why AI will struggle to attain the level of human intelligence. |
Mon, 22 May 2023
While the possibilities of AGI emergence seem great, it also raises safety concerns. On the show, Vahid Behzadan, an Assistant Professor of Computer Science and Data Science, joins us to discuss the complexities of modeling AGIs so that they accurately achieve their objective functions. He touched on related issues such as abstractions during training, the problem of unpredictability, communication among agents, and so on. |
Mon, 15 May 2023
Julian Michael, a postdoc at the Center for Data Science, New York University, joins us today. Julian’s conversation with Kyle was centered on the NLP community metasurvey: a survey aimed at understanding expert opinions on controversial NLP issues. He shared the process of preparing the survey as well as some shocking results. |
Tue, 9 May 2023
Kyle shares his own perspectives on the challenges of getting insight from surveys. The discussion ranges from commentary on the market research industry to specific advice for detecting disingenuous or fraudulent responses and filtering them from your analysis. Finally, he shares some quick thoughts on using the chi-square test to interpret cross-tab results in survey analysis. |
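As a quick illustration of the chi-square test Kyle mentions, the sketch below runs scipy's test of independence on a made-up two-by-two cross tab of survey answers by respondent group; the counts are purely illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                      Yes   No
crosstab = np.array([[40, 60],    # group A respondents
                     [55, 45]])   # group B respondents

chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}")
# A small p-value suggests the answer distribution differs between the groups.
```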
Mon, 1 May 2023
Jeff Jones, a Senior Editor at Gallup, joins us today. His conversation with Kyle spanned a range of topics on Gallup’s poll creation process. He discussed how Gallup generates unbiased questionnaires, gets respondents, analyzes results, and everything in between. |
Mon, 24 April 2023
Gireeja Ranade, a professor at the University of California, Berkeley, speaks with us today. She presented her study on implementing inclusive study groups at scale and shared the observed improvements in student performance after the intervention. |
Fri, 21 April 2023
Today, we are joined by David Bourget. David is an Associate Professor in Philosophy at Western University in London, Ontario. He is also the co-director of the PhilPapers Foundation and Director of the Center for Digital Philosophy. He joins us to discuss the PhilPapers Survey project. The PhilPapers Survey was first conducted in 2009, with a follow-up survey in 2020. David discussed the need for the subsequent survey and what changed. He described the metric used to measure opinion changes between the 2009 and 2020 surveys, and shared future plans for the PhilPapers surveys. |
Mon, 10 April 2023
Today’s show focused on an essential part of surveys — missing values. This is typically caused by a low response rate or non-response from respondents. Yajuan Si is a Research Associate Professor at the Survey Research Center at the University of Michigan. She joins us to discuss dealing with bias from low survey response rates. |
Mon, 3 April 2023
We are joined by two guests today, Mariah, a Ph.D. student in the CORE Robotics Lab at Georgia Tech, and Matthew Gombolay, the Director of the CORE Robotics Lab. They discuss best practices for measuring respondents' perceptions in surveys. |
Mon, 27 March 2023
Ever wondered what your next career would be? Today, Keyon Vafa, a computer science Ph.D. student at Columbia University, joins us to discuss his latest research on developing a machine-learning model for career prediction. Keyon extensively spoke about how the model was developed and the possibilities it brings. |
Mon, 20 March 2023
Noura Insolera, a Research Investigator with the Panel Study of Income Dynamics (PSID), joins us to share how PSID conducts longitudinal household surveys. She also shared some interesting findings from their data exploration, particularly on the observation and trends in food insecurity. |
Tue, 14 March 2023
Susan Gerbic joins Kyle to review some of the surveys Data Skeptic has launched, draft a new survey about podcast listening habits, and then review the results of that survey. You can see those results at the link below. https://survey.dataskeptic.com/survey/result/1675102237053 Watch the videos Susan mentioned on her YouTube page at the link below. https://www.youtube.com/playlist?list=PL7VAuaQDhPTVaLeI1IcpYph5lH19xA1u4 |
Mon, 6 March 2023
The use of social bots to fill out online surveys is becoming prevalent. Today, we speak with Sara Bybee, a postdoctoral research scholar at the University of Utah. Drawing on her research, Sara shares how she detected social bots, strategies to curb them, and how underrepresented groups can be better represented in surveys. |
Mon, 20 February 2023
Our guest today is Zoltán Kekecs, a Ph.D. holder in Behavioural Science. Zoltán highlights the problem of low replicability in journal papers and illustrates how researchers can better ensure complete replication of their research and findings. He used Bem's experiment as an example, talking extensively about the methodology and results. |
Mon, 13 February 2023
On the show, Iñigo Martinez, a Ph.D. student at the University of Navarra, shares results from his survey investigating how data practitioners carry out data science projects. He revealed the methodologies typically used by data practitioners and the success factors in data science projects. |
Mon, 6 February 2023
On the show today, Dino Carpentras, a post-doctoral researcher in the Computational Social Science group at ETH Zürich, joins us to discuss how opinion dynamics models are built and validated. He explained why quantifying opinions is complex and shared strategies for developing robust models that measure and predict public opinion. |
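For a concrete feel for what an opinion dynamics model looks like, here is a toy sketch of a bounded-confidence (Deffuant-style) model, one common family in this literature; it is not necessarily the kind of model Dino works with, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
opinions = rng.uniform(0, 1, size=100)   # each agent holds an opinion in [0, 1]
epsilon, mu = 0.2, 0.5                   # confidence bound and convergence rate

for _ in range(20_000):
    i, j = rng.integers(0, opinions.size, size=2)
    if i != j and abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift             # agents who are close enough move toward each other
        opinions[j] -= shift

# After many pairwise interactions, opinions settle into a few clusters.
print(np.round(np.sort(opinions), 2))
```

Validating such a model against real survey data is exactly the hard part discussed in the episode, since measured opinions are noisy and the mapping from model state to questionnaire answers is not obvious.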
Mon, 30 January 2023
Crafting survey questions is one thing, but getting your audience to complete them is another. On the show today, we speak with Alexander Nolte, an Associate Professor at the University of Tartu. Alexander discussed the use of Casual Affective Triggers (CAT) to incentivize people to accept survey invitations and improve completion rates. He revealed the impact of CATs on survey response rates from a study he conducted. |
Mon, 23 January 2023
Traditional surveys ask fixed, predetermined questions, which restricts the information that can be gathered. Today, Ziang Xiao, a Postdoc Researcher in the FATE group at Microsoft Research Montréal, talks about conversational surveys, a type of survey that asks questions based on preceding answers. He discussed the benefits of conversational surveys and some of the challenges they pose. |
Tue, 17 January 2023
Today, Jenny Tang, a Ph.D. student in societal computing at Carnegie Mellon University, discusses her work on how well privacy and security surveys conducted on platforms such as Amazon MTurk and Prolific generalize. Jenny shared the drawbacks of using such online platforms, the discrepancies she observed in the samples drawn, and key insights from her results. |
Tue, 10 January 2023
This episode kicks off the new season of the show, Data Skeptic: Surveys. Linhda rejoins the show for a conversation with Kyle about her experience taking surveys and what questions she has for the season. Lastly, Kyle announces survey.dataskeptic.com, a new site we're launching to gather your opinions. Please take a moment and share your thoughts! |