Abstract
Architectural renderings can realistically simulate buildings and their effects after design completion, making them a necessary part of design expression in architectural education and a key component of architectural space visualization. However, traditional rendering methods involve complex software configuration and long processing times, consume considerable human and material resources, and still cannot guarantee a good final result. This teaching case adopts AI rendering techniques based on neural networks and deep learning algorithms, aiming to guide students toward more innovative architectural designs while saving time and improving efficiency. The case offers three innovations: first, mode innovation, which promotes reform of traditional architectural design teaching through intelligent enhancement, targeting the cultivation of students’ professional competencies; second, form innovation, which visualizes architectural schemes through a symbiosis of the virtual and the real, giving students an effective way to showcase unique design concepts; third, content innovation, which introduces AI technology as a new productive force, enabling students to create architectural designs with ‘beauty’ more efficiently and with more personal character.
Keywords: AI large model; architectural renderings; AI rendering; architectural education; design innovation
Authors: Chen Zhonggao, Li Haiyan, Yang Jun (Yantai University)
I. AI Large Models Empowering Architectural Design Innovation
Architectural renderings can realistically simulate buildings and their effects after design completion, making them a necessary part of design expression in architectural education and a key component of architectural space visualization. They are design-representation images produced through computer modeling; however, traditional rendering relies on heavy computation of the interaction between light and 3D models, so software configuration is complex, processing times are long, considerable human and material resources are consumed, and a good final result still cannot be guaranteed.
AI rendering (Artificial Intelligence Rendering) refers to the use of machine learning algorithms to accelerate the creation of realistic images and animations. It relies on neural networks and deep learning algorithms to optimize the computation of light interaction, reflection, and texture, allowing realistic renderings to be completed in less time. Rendering features can be adjusted in real time with immediate visual feedback, making design changes easier to explore; AI rendering also lets architects and clients collaborate more efficiently.
II. Organization of AI Architectural Rendering Teaching
(1) AI Architectural Rendering Teaching Based on Innovative Concepts
Effectively conveying innovative designs has always been a major challenge in architectural education; the key is ensuring that students can quickly and conveniently create and explore design concepts. AI rendering makes it easier to create architectural visualizations without mastering complex 3D modeling and rendering software: users describe their architectural vision in natural language, and the tools generate realistic images from those descriptions.
In architectural education, applying AI rendering to students’ completed 3D design sketches can significantly shorten the time required by traditional rendering and improve design efficiency. Moreover, AI rendering helps students refine and confirm their original design intent, promoting design innovation while maintaining efficiency.
Therefore, this teaching case introduces AI rendering in the early and late stages of design teaching, guiding students to generate AI large-model renderings from the three-dimensional forms of their design intent, quickly visualizing multiple design concepts and refining them iteratively.
(2) Application Courses
This teaching case uses the architecture courses “Architectural Design 3-4” and “Parametric Design” as vehicles, targeting third-year students. “Parametric Design” provides software-operation instruction, while “Architectural Design 3-4” is a professional design practice course; students apply what they learn in the former to the design process of the latter, combining theory and practice across the two courses.
(3) Stable Diffusion Application Tool
Stable Diffusion is an AI image-generation tool based on Latent Diffusion Models (LDMs). It converts natural-language text into semantic vectors that the machine can process; conditioned on these vectors, it gradually denoises pure noise into image latent variables, and finally decodes those latent variables into a real image. It is one of the few mainstream AI drawing tools that can be deployed on a home computer, generating high-resolution AI images in just a few seconds without preprocessing or postprocessing.
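The iterative-denoising idea behind LDMs can be illustrated with a deliberately simplified sketch. The snippet below is a stdlib-only toy, not Stable Diffusion’s actual networks: a real model predicts noise with a trained U-Net conditioned on text embeddings, whereas here the “noise prediction” is simply the known gap to a target signal, which is the assumption that makes this a toy.

```python
import random

def toy_denoise(target, steps=20, seed=42):
    """Toy illustration of iterative denoising: start from pure noise and
    repeatedly remove a fraction of the predicted noise. In a real latent
    diffusion model the noise is predicted by a trained network conditioned
    on text; here it is just the known gap to a clean target signal."""
    rng = random.Random(seed)
    latent = [rng.uniform(-1.0, 1.0) for _ in target]  # "pure noise"
    for _ in range(steps):
        # predicted noise = current latent minus the clean signal (toy assumption)
        predicted_noise = [l - t for l, t in zip(latent, target)]
        # remove a fraction of the predicted noise at each step
        latent = [l - 0.3 * n for l, n in zip(latent, predicted_noise)]
    return latent

def distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

After a modest number of steps the latent ends up very close to the target, mirroring how each denoising iteration moves the latent image toward a plausible clean image.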
III. Tutorial on Architectural Rendering Applications Based on Stable Diffusion
(1) Basic Knowledge
Stable Diffusion offers two generation methods: text-to-image and image-to-image. Through experimentation, this teaching case found that image-to-image cannot achieve precise control and therefore does not meet the goals of architectural rendering: it is unsuitable for quickly obtaining the desired effect while preserving the original creative intent. Text-to-image is therefore the method applied in this case.
For architectural rendering, Stable Diffusion’s text-to-image can be combined with ControlNet and Lora, as shown in Figure 1. The former constrains the output to the input line drawing, ensuring consistency with the original design intent, while the latter precisely controls the rendering style. Integrating these three components achieves the teaching goals of this case.
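The three-component setup can be summarized as a single generation request. The structure below is only an illustrative sketch, not an actual Stable Diffusion API; every file and model name except control_sd15_mlsd (the model named later in this tutorial) is a hypothetical placeholder.

```python
# Illustrative sketch (not a real API) of how text-to-image, ControlNet,
# and Lora combine in one generation request. File names are hypothetical.
generation_request = {
    "mode": "text-to-image",
    "prompt": "building, white concrete, sea, sand beach, photorealistic",
    "negative_prompt": "blurry, sketch, poor quality, text, logo",
    "controlnet": {
        "input": "line_drawing.png",    # keeps output consistent with design intent
        "preprocessor": "mlsd",         # M-LSD straight-line detection
        "model": "control_sd15_mlsd",
    },
    "lora": "beach_style.safetensors",  # precise control of the rendering style
}
```

Reading the request top to bottom mirrors the workflow: the prompt states what to draw, ControlNet pins the geometry to the line drawing, and the Lora model fixes the style.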
Among these, the prompt words are crucial, including positive prompts and negative prompts. Positive prompts refer to the descriptions of the effects users wish to achieve in the generated images, while negative prompts refer to the content elements users do not want to appear in the generated images.
Figure 1 Stable Diffusion Interface (Top Left: Text-to-Image Selection; Top Right: ControlNet Plugin; Bottom: Lora Plugin)
(2) Parameter Setting Considerations
Parameter settings are shown in Figure 2. The number of iteration steps can be raised as needed, but higher is not always better: too many steps may introduce unpredictable elements. Suggested settings: for lower-end graphics cards (around the GTX 1080 class), set the iteration steps to 20; for higher-end cards (around the RTX 3060 class), set them to 40. For the batch count and batch size: on higher-end cards, increase the batch size, which can be set to 4-6 images; on lower-end cards, increase the batch count instead.
The aspect ratio of the generated image can be set as needed, while other parameters can remain default.
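The parameter suggestions above can be captured in a small helper. This is only a sketch of the article’s guidance; the exact batch-count value for lower-end cards is an assumption, since the text says only to “increase the total batch quantity.”

```python
def suggest_parameters(high_end_gpu: bool):
    """Return the parameter suggestions from the text: iteration steps,
    batch size (images per batch), and batch count. GPU tiers follow the
    article's examples (GTX 1080 class = lower-end, RTX 3060 class =
    higher-end); the batch count of 4 for lower-end cards is illustrative."""
    if high_end_gpu:
        # higher-end cards: more steps, larger batch size (4-6 images)
        return {"steps": 40, "batch_size": 4, "batch_count": 1}
    # lower-end cards: fewer steps; raise the batch count rather than batch size
    return {"steps": 20, "batch_size": 1, "batch_count": 4}
```

Keeping the batch size at 1 on lower-end cards avoids running out of GPU memory; the batch count simply repeats the run, trading time for stability.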
Figure 2 Stable Diffusion Parameter Setting Interface
(3) Detailed Tutorial for Text-to-Image
The following elaborates on the teaching application process for quickly generating diverse renderings through line drawings in this case using Stable Diffusion text-to-image, mainly including three steps: inputting line drawings, inputting prompt words, and outputting rendering images.
1. Input Line Drawings
Activate the ControlNet plugin, click to upload the image, and select M-LSD line detection with the control_sd15_mlsd model from the options. The other ControlNet settings can follow the values displayed in the image, or be tuned through repeated testing in subsequent steps.
Figure 3 ControlNet Input Line Drawings
2. Input Prompt Words
Load the Lora model, and in the “Text-to-Image” menu under the main interface, input the image elements you want to generate in the “Prompt Words” input box. The prompt words for this case are as follows.
Positive prompt words: building, White concrete, people, Assembled building, coconut palm, Best quality, nature environment, Architectural photography, photorealistic, hyperrealistic, super detailed, 8k, sea, sand beach, Photographers, cinematic photography, ultra-detailed, highly tailored detail, cinematic, rendering, archdaily, 500px, clean sky.
Negative prompt words: signature, soft, blurry, drawing, sketch, poor quality, ugly, text, type, word, logo, pixelated, low resolution, saturated, high contrast, over sharpened, (cloud), dirt.
It is important to note that the set width and height must match the aspect ratio of the original image.
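The prompt lists and the aspect-ratio note above can be handled with two small helpers — a hypothetical sketch, not part of Stable Diffusion itself: one joins prompt terms into the comma-separated string the input box expects (dropping accidental duplicates), and the other checks that the set width and height match the original image’s aspect ratio.

```python
def build_prompt(terms):
    """Join prompt terms into a comma-separated prompt string,
    dropping exact duplicates (case-insensitive) while preserving order."""
    seen, kept = set(), []
    for term in terms:
        key = term.strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(term.strip())
    return ", ".join(kept)

def aspect_ratio_matches(src_w, src_h, out_w, out_h, tol=0.01):
    """Check the note above: the set output width/height must match
    the aspect ratio of the original line drawing, within a tolerance."""
    return abs(src_w / src_h - out_w / out_h) <= tol
```

For example, a 1920x1080 line drawing pairs with any 16:9 output size, but not with a 1024x1024 square.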
3. Output Rendering Images
After running the model, as shown in Figure 4, different target style architectural renderings can be obtained.
Figure 4 Architectural Rendering Generation (Top: Beach Style; Bottom: Model Style)
IV. Mode, Form, and Content Innovation
(1) Mode Innovation: Intelligent Enhancement
As a human production activity, architectural design must ultimately bring conceptual schemes into the physical world. Since testing this transformation through actual construction is not feasible, architectural renderings become the only practical means of judging the quality of creative proposals and guiding real construction, and are thus key content in architectural design teaching. However, because traditional rendering technologies consume substantial human and time resources, renderings have long been produced only as the final product after a scheme is settled: they express the scheme’s form but cannot serve as a design tool, hindering further innovation in architectural form.
This teaching case introduces AI rendering technology, guiding students to quickly generate desired architectural form-intention images from text and simple line drawings, and applies it to two specific scenarios in the design process: first, in the early stage of design, for conceptual scheme creation, using AI’s generative capabilities to compare multiple directions in the conceptual phase; second, in the later stage, for scheme expression, outputting high-quality generated images that remove much of the effort traditionally needed to obtain suitable renderings, allowing students to focus on designing innovative proposals. These two applications of AI-assisted teaching promote reform of traditional architectural design teaching modes, significantly enhance design efficiency, and target the cultivation of students’ professional competencies.
(2) Form Innovation: Virtual-Real Twin
Architectural renderings are a crucial means of visualizing architectural schemes and an effective way to showcase unique design concepts. However, limited by existing technology, teaching with renderings has long been constrained in two respects: on one hand, design effects cannot be presented in real time in a way that consistently integrates the virtual design with the real environment; on the other, students cannot communicate with users in both directions through renderings. These issues limit students’ further refinement of scheme forms and deny them real user feedback.
This teaching case addresses these issues by introducing AI technology, whose powerful virtual-real twin capabilities help students solve them. First, in the later teaching phase, augmented reality (AR) and virtual reality (VR) let students project design schemes into real scenes in real time, allowing users to better understand and experience design intentions. Second, students can quickly adjust schemes to users’ needs and preferences, showcasing different design options in real time and refining them based on feedback. Third, students can quickly generate dynamic videos of their scheme designs, presenting spatial schemes comprehensively, providing a better experience, and improving design efficiency and quality.
(3) Content Innovation: New Productive Forces Promoting Architectural Aesthetic Education
As an essential part of a beautiful living environment, architectural form makes its design outcomes a significant vehicle for aesthetic education. The process of creating architecture is itself aesthetic-education practice for its participants and an important way for schools to shape students’ creativity, imagination, and perception. Integrating aesthetic education into university teaching in ways suited to the characteristics of the architecture major therefore has significant practical value for aesthetic education.
This teaching case introduces AI technology as a new productive force, giving students new opportunities for design innovation and expression. Through this creative process, students exercise computational design thinking, expand their creativity and imagination, and acquire key skills in architectural aesthetics training: clarifying design themes, refining three-dimensional architectural forms, applying aesthetic knowledge comprehensively, and experiencing spatial atmospheres. Exercising these skills with artificial intelligence enables students to create architectural design works with ‘beauty’ more efficiently and with more personal character, opening a new approach to architectural aesthetic education and innovatively enhancing students’ aesthetic literacy.
V. Teaching Effectiveness
This approach has been applied in architectural design courses, introducing AI rendering in the early and late stages of design teaching and guiding students to generate renderings with AI large models; it has demonstrated strong teaching adaptability and achieved excellent results. Over the past three years, the teaching leader has guided students in competitions, winning more than 30 national-level awards and over 60 provincial-level awards, including 1 first prize and 2 second prizes in the NCDA Future Designers National College Digital Art Design Competition, 1 first prize in the Milan Design Week Excellent Works Exhibition of Chinese University Design Disciplines, and 3 second prizes in the China Good Ideas National Digital Art Design Competition, among many other awards (Figure 5).
Figure 5 Some Certificates of Awards for Guiding Students in Competitions
Statement
This article is sourced from the Shandong Provincial Institute of Electric Education. The images and text above are shared for their value; copyright belongs to the original author and source, and the content represents only the author’s views, not the position of this public account. If there are any copyright issues, please contact us promptly.