Sci-Tech

Be wary of the ethical risks of commercializing generative AI

2025-05-15   

Generative artificial intelligence (AIGC), with large models at its core, is accelerating its integration into business scenarios, but the ethical issues arising in the process are becoming increasingly prominent. Algorithmic "black boxes", data abuse, and the evasion of responsibility in particular exhibit clearly market-driven characteristics, and institutional governance is urgently needed to address this new form of technology-market failure. The author has summarized the main manifestations of AIGC ethical risk in commercialization:

Unclear property rights over data elements have led to data abuse and a technological "black box". Data, the core digital factor of production, still lacks a clear mechanism for confirming ownership and setting reasonable prices. Platform enterprises can acquire user data at low cost through vague authorization and cross-platform crawling, while users lack control over their own data. Under this structural asymmetry, AIGC products are widely embedded in business processes via the SaaS model, and their highly closed, opaque algorithmic logic forms a technical "black box". Users passively contribute data without their knowledge, and their rights to know and to choose are not effectively protected.

A lagging corporate governance structure exacerbates the erosion of ethical boundaries. Some enterprises still follow traditional industrial logic, guided by profit and scale, and have not fully incorporated ethical governance into corporate strategy; it is either marginalized or reduced to a formality. Under commercialization pressure, some companies apply AIGC technology in sensitive areas such as deepfakes, emotional manipulation, and consumption inducement to steer user decisions and even shape public cognition. Although this yields short-term gains, it undermines long-term social trust and the ethical order.
Imperfect regulatory rules create governance gaps and a vacuum of responsibility. The existing regulatory system has not yet adapted to AIGC's rapid evolution in its division of responsibilities, technical understanding, and enforcement measures, allowing some enterprises to advance their business in regulatory blind spots. When generated content triggers controversy, platforms often invoke "technological neutrality" and "non-human control" to evade responsibility, producing an imbalance between social risks and economic interests and weakening public confidence in governance mechanisms.

Biased algorithm training mechanisms entrench prejudice and misalign values. For reasons of efficiency and cost, enterprises often train models on historical data; without a bias-control mechanism, this easily locks bias into algorithmic output. In advertising recommendation, talent screening, and information distribution, such biases may further reinforce labeling tendencies, harm the rights of specific groups, and even distort social value perceptions.

A weak foundation of social cognition allows ethical risks to spill over. Most users do not understand how AIGC works or what risks it carries, making it difficult for them to identify false information and covert steering. Education, media, platforms, and other parties have not yet formed a joint effort to promote ethical literacy, leaving the public susceptible to falsehood and misdirection and providing a low-resistance environment for AIGC abuse; risks then spread quickly into public opinion and cognitive security.

So how can the design of the ethical-risk governance system be improved to ensure that technology serves the good?
The author believes that resolving the ethical-risk dilemma in AIGC's commercial application requires action on multiple fronts, including the property-rights system, corporate governance, the regulatory system, algorithmic mechanisms, and public literacy, in order to build a systematic governance architecture that covers the entire process, combines targeted and comprehensive measures, and achieves forward-looking warning and structural mitigation of ethical risks.

First, establish data property rights and pricing mechanisms to crack the data-abuse and technology black-box problem. Legislation on data-element ownership should be accelerated to clarify the boundaries of ownership, use, and transaction rights and to guarantee users a complete rights chain over their data: awareness, authorization, revocation, and traceability. A unified data-trading platform and an explicit pricing mechanism should be established so that users can actively manage and price their own data. Platforms should be pushed to disclose how their algorithms operate, or at least to provide interpretable disclosures, and an information-source labeling mechanism should be built to enhance the transparency of AIGC operations and users' ability to perceive them.

Second, reform corporate governance structures to embed ethical responsibility and value orientation. AI ethics governance should be elevated to a corporate strategic issue, with algorithm ethics committees and chief ethics officers established to anchor ethics management in the organizational structure. A prior "technology ethics assessment" mechanism should conduct ethical impact assessments before product design and deployment, ensuring a sound value orientation and clear safety boundaries. An ethics audit system should be introduced and ethical practice incorporated into ESG performance evaluation. Leading platforms should be encouraged to publish ethics practice reports, creating a demonstration effect and guiding the industry toward "innovation for good".

Third, strengthen cross-departmental collaborative supervision to narrow governance gaps and ambiguous zones of responsibility. A cross-departmental regulatory coordination mechanism should be established as soon as possible, forming a joint AIGC comprehensive governance group to coordinate the formulation and enforcement of regulations. Special rules on content identification, data-ownership definition, and the attribution of algorithmic responsibility should be issued promptly, clarifying platforms' primary responsibility for generated content. A principle of "presumed responsibility" can be applied to AIGC-generated content: if a platform cannot prove it was without fault, it bears the corresponding responsibility. This prevents enterprises from invoking "automatic algorithmic generation" to evade governance obligations and supports a full-chain governance system combining prior prevention, in-process supervision, and after-the-fact accountability.

Fourth, improve training-data governance rules to eliminate algorithmic bias and value misalignment. Authoritative third parties should take the lead in building public training corpora, providing diverse, credible, audited corpus resources for enterprises and raising the ethical quality of foundational data. Enterprises should be required to disclose their training-data sources, de-biasing techniques, and value-audit processes, and an algorithm filing mechanism should be established to strengthen external supervision. Enterprises should also be pushed to add fairness, diversity, and other indicators to their algorithmic objectives, moving beyond the current single-minded business orientation of click-through rate and dwell time toward a value-balanced logic for AIGC applications.

Finally, enhance public digital literacy to solidify the consensus foundation of ethical governance. AI ethics and algorithm-literacy education should be incorporated into primary, secondary, and university curricula, and social forces such as media, industry associations, and public-welfare organizations should be supported in participating in AI ethics governance. "Public technology observation teams" and "ethical-risk reporting channels" should be set up to make civic oversight routine. Platforms should be encouraged to build ethics-education and risk-warning mechanisms, releasing timely technical explanations and ethical guidance for popular AIGC applications to ease public anxiety and raise society's overall ability to identify and guard against AIGC abuse.

The commercial application of generative artificial intelligence is both a major opportunity for integrating technological progress with economic development and a severe test of the ethical governance system.
Only by balancing development and regulation under a systems-governance mindset, and by strengthening institutional design and the implementation of responsibility, can we promote technological innovation while upholding ethical bottom lines and cultivating a safe, sustainable, and trustworthy digital-economy ecosystem. (Authors: Li Dayuan, professor at the School of Business, Central South University; Su Ya, PhD student at the School of Business, Central South University)

Editor: He Chuanning    Responsible editor: Su Suiyue

Source: Guangming Daily


