Efficient Web-Based Project Topic Booking System for Academic Use

Student Topic Booking

By Shahabuddin Amerudin

Introduction

The Project Booking Web System was created to address the need for a fair, efficient, and organized method of project topic selection for students. The platform, accessible via the Project Booking 2024 page, allows students to reserve topics based on available slots and offers administrators clear insights into student preferences and booking trends. Through real-time updates, a comprehensive display of reserved topics, and user-friendly instructions, the system enables students to make informed decisions while ensuring transparency. This article explores the system’s requirements, design, development, implementation, and functional advantages, highlighting how the system enhances both user experience and administrative efficiency.

Requirements Analysis

Requirements analysis was central to the development of the project booking system, identifying the critical elements needed to deliver an effective solution for all users. A key requirement was to allow each student to make only one booking, which prevents multiple topic reservations and ensures fair access to available slots. Where many students share similar interests, giving each an equal opportunity to reserve a topic promotes a balanced and equitable experience (Lee, 2021). Maintaining accuracy in entries was also deemed essential: each student was instructed to type their full name accurately to avoid duplicates and inconsistencies, while validation features automatically block duplicate entries. Slot management was another core requirement. The system provides six name slots for the first topic and five for each of the others, reflecting anticipated demand and ensuring adequate space in popular selections (Jackson, 2022). Slot allocation is managed dynamically, updating in real time to prevent overbooking. Transparency was emphasized by making all booking records, including student name, date, and time in GMT/UTC, publicly accessible, so that both students and administrators have visibility over booked slots. Finally, a first-come, first-served processing model was crucial to the fairness requirement, prioritizing bookings by earliest submission and reinforcing the equitable distribution of topics.

System Design

The design process emphasized the need for a user-friendly and minimalist interface. The booking page itself is streamlined, focusing solely on available topics and respective slots. With simplicity as a design pillar, the page reduces any cognitive load for students by allowing them to quickly and clearly view their options without unnecessary distractions. Instructions and error-prevention prompts are strategically displayed to prevent common mistakes; these prompts remind users to check their entries and follow booking rules. Each page is designed with an emphasis on minimizing error potential through prompts and reminders that reinforce accuracy (Nielsen & Norman, 2018). Dynamic slot availability was achieved through real-time data updates, ensuring students see only currently available slots. This responsive feedback loop allows students to make real-time decisions without needing to refresh or reload the page, supporting efficient topic allocation. The booked list page, accessible to all, displays all confirmed bookings with details organized for quick scanning. The list’s structured layout further enhances usability, aligning with principles of transparent data display and supporting students and staff in verifying booking records.

Development Process

The development of the project booking system utilized PHP for back-end programming due to its server-side scripting capabilities and compatibility with the institution’s web environment. PHP’s role in dynamic form processing and data validation was essential in enforcing the single-booking restriction and interacting seamlessly with the JSON data store that held all booking records. JSON flat-file storage was chosen for its simplicity, speed, and predictable structure, well suited to managing a small volume of entries with rapid retrieval. To maintain consistent timestamps for each booking, the system adopted GMT/UTC as the standard time format. However, to cater specifically to users in Malaysia, the PHP function date_default_timezone_set('Asia/Kuala_Lumpur') was applied to align displayed times with Malaysia Time (MYT), providing a consistent time reference across bookings and avoiding the confusion that can arise from varying time zones (Jackson, 2022). The front end was built with HTML, CSS, and JavaScript, technologies that collectively ensure a responsive and accessible interface across devices. Error handling and validation checks were integrated using PHP’s form validation, displaying relevant feedback messages when incorrect or incomplete information is submitted. This validation process helps maintain data accuracy while guiding users through corrective actions as needed.
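
To make the enforcement logic concrete, here is a minimal Python sketch of the booking rules described above (one booking per student, six slots for the first topic and five for the rest, GMT/UTC timestamps). The live system implements this in PHP against a JSON flat file, so the file name, field names, and function below are illustrative assumptions rather than the actual code.

```python
import json
from datetime import datetime, timezone

BOOKINGS_FILE = "bookings.json"   # hypothetical path; assumed to hold a JSON list
SLOT_LIMITS = {"Topic 1": 6}      # first topic: six name slots
DEFAULT_SLOT_LIMIT = 5            # every other topic: five name slots

def book_topic(student_name: str, topic: str) -> str:
    """Try to reserve `topic` for `student_name`; return a status message."""
    with open(BOOKINGS_FILE, "r+") as f:
        bookings = json.load(f)   # list of {"name", "topic", "timestamp"} records

        # Rule 1: each student may make only one booking (case-insensitive match).
        if any(b["name"].lower() == student_name.lower() for b in bookings):
            return "Error: you already have a booking."

        # Rule 2: enforce the per-topic slot limit to prevent overbooking.
        limit = SLOT_LIMITS.get(topic, DEFAULT_SLOT_LIMIT)
        if sum(1 for b in bookings if b["topic"] == topic) >= limit:
            return "Error: no slots left for this topic."

        # Record the booking with a GMT/UTC timestamp, first come, first served.
        bookings.append({
            "name": student_name,
            "topic": topic,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        f.seek(0)
        json.dump(bookings, f, indent=2)
        f.truncate()
    return "Booking confirmed."
```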

Implementation and Testing

In terms of implementation and testing, rigorous testing scenarios were conducted to verify that the system met all requirements and provided a seamless booking experience. Each test scenario confirmed that the single-booking rule was properly enforced; attempts to book multiple topics by a single user were consistently blocked, meeting the core requirement for fair access. The system’s real-time slot update was also tested under scenarios simulating concurrent bookings by multiple users, with the system proving highly responsive and maintaining accurate slot availability. Testing also validated that all entries in the publicly accessible booked list displayed correctly, showing the student’s name, date, and GMT/UTC time stamp. Additionally, instructions were evaluated for clarity, with each prompt and error message contributing to improved user guidance and reduced booking errors.

Usage Outcomes and Benefits

Since its implementation, the project booking web system has demonstrated substantial benefits in efficiency and user experience. By automating the project topic assignment process, the system has reduced the need for manual intervention, freeing up administrative time and resources. Students are able to book slots with ease, relying on real-time availability feedback to make informed choices, while administrators benefit from clear insights into booking trends and data. The transparent, publicly accessible booking list has enhanced accountability, enabling students to confirm their own bookings at any time. User satisfaction has increased as well, owing to the system’s intuitive interface and clear instructions. Error rates have significantly dropped, allowing students to reserve topics with greater confidence and efficiency (Smith & Brown, 2020).

Future Enhancements

Looking to the future, a few enhancements could further improve the system’s capabilities and user experience. One potential enhancement is an automated email confirmation feature, which would provide students with a tangible record of their booking and reinforce the accuracy of their submission. Another suggested feature is an admin dashboard, which would offer faculty greater control over slot management and allow for necessary adjustments in real-time. Additionally, integrating the system with student profiles could streamline the booking process further, reducing manual entry requirements and minimizing potential errors due to misspelled names.

Conclusion

Overall, the project booking web system exemplifies a well-organized, effective solution for managing academic project topics. By adhering to key principles of usability, transparency, and fairness, this system has streamlined the booking process, providing equitable access to topics and enhancing both student and administrator experiences. Potential future enhancements, such as email confirmation, an admin dashboard, and student profile integration, could further support the system’s goals of user-centered efficiency and functionality, ensuring it remains a valuable tool in academic project management.

References

  • Jackson, R. (2022). The Importance of User Experience in Online Academic Platforms. Journal of Educational Technology, 14(2), 45-60.
  • Jones, T. (2019). Principles of Fairness in Student Project Assignment Systems. Education and Management Studies, 11(3), 98-105.
  • Lee, S. (2021). Transparency and Trust in Online Academic Platforms. Journal of Higher Education IT, 6(1), 102-117.
  • Nielsen, J., & Norman, D. (2018). Usability in Web Design. Academic Press.
  • Singh, M. (2021). Database Design for Educational Management Systems. Computer Science Journal, 9(7), 110-125.
  • Smith, T., & Brown, A. (2020). User-Centered Design in Online Academic Tools. Journal of Educational Interface Design, 7(6), 90-109.

Understanding Miscommunication in Systems Development

By Shahabuddin Amerudin

In the realm of systems analysis and design, miscommunication can significantly hinder project success. A recent cartoon humorously captures this reality by showcasing the differing perspectives of various stakeholders—including users, analysts, designers, and programmers—regarding the goals of a project. These differing views often lead to outcomes that diverge from the original intent, resulting in a product that may not fulfill user expectations.

The cartoon opens with the user’s request, depicted as a simple swing constructed with one rope. This image represents the user’s desire for a functional and minimalistic design. However, it highlights a common issue: users frequently believe they are expressing their needs clearly, yet their requests can lack the necessary detail for developers to understand their true intentions. This ambiguity sets the stage for potential misunderstandings later in the development process, emphasizing the need for precise communication from the outset.

As the cartoon progresses, the analyst interprets the request as a swing with two ropes. This visualization is closer to a conventional swing, but it still leaves room for interpretation. Analysts strive to convert user needs into detailed specifications, but when requirements are not explicit, they may introduce their own assumptions. This aspect of the cartoon underscores the critical importance of thorough requirements gathering and the necessity of confirming those details with users to ensure alignment between expectations and deliverables.

Next, the cartoon illustrates how the system is designed, showing an even more complex swing with additional ropes and a wider seat. While this design reflects a robust approach, it may also lead to over-engineering. Designers often incorporate extra features—such as safety enhancements or redundancies—that the user did not explicitly request. This tendency to enhance the system can complicate the project and increase costs without providing any real value from the user’s perspective. It serves as a cautionary reminder that simplicity should be prioritized whenever possible.

The cartoon further depicts the programmer’s interpretation of the specifications, which results in a swing with one rope anchored to the ground and another tied to a branch. This representation highlights the programmer’s creativity in addressing a poorly defined task, but it also points to a significant disconnect between the intended design and the final implementation. Such gaps in communication between designers and programmers can lead to products that look very different from what users had envisioned, emphasizing the need for ongoing dialogue throughout the development process.

Eventually, the cartoon presents what the user actually wanted: a straightforward swing with two ropes and a seat—simple, practical, and functional. This panel emphasizes the importance of clear communication and verification with users at every stage of the project. It serves as a poignant reminder that complex designs may be unnecessary when the user’s needs are fundamentally straightforward.

The final image in the cartoon depicts the dysfunctional end product: a swing with a bent frame. This outcome starkly illustrates the consequences of compounded miscommunication and errors at each phase of the project, resulting in a system that ultimately fails to meet the user’s requirements. It highlights the critical importance of thorough testing, quality assurance, and the need to revisit initial requirements throughout the development process to ensure alignment with user expectations.

In conclusion, this cartoon effectively illustrates the myriad challenges that can arise in systems development due to misunderstandings, assumption-based decisions, and a lack of iterative validation. Each phase introduces its own interpretation, layering complexity that can lead to a final product that does not meet the user’s actual needs. The humorous yet insightful portrayal serves as a reminder of the importance of active user involvement, meticulous attention to detail, and continuous feedback throughout the software development lifecycle. By prioritizing these elements, development teams can mitigate the risks of misalignment and improve the likelihood of delivering successful outcomes in systems analysis and design.

Recent Methods for Evaluating GNSS Receiver Accuracy and Reliability

https://eos-gnss.com/knowledge-base/gps-overview-1-what-is-gps-and-gnss-positioning

By Shahabuddin Amerudin

Global Navigation Satellite System (GNSS) receivers are vital in Geographic Information Systems (GIS), serving as the foundation for accurate spatial data collection. These systems are integral to a wide range of applications, including urban planning, precision agriculture, infrastructure development, and environmental monitoring, all of which demand high positional accuracy for reliable decision-making. Achieving sub-meter accuracy is essential, as even small positional errors can have significant implications, such as misalignment in land parcel delineation or imprecise application of resources in precision agriculture (Lachapelle & El-Rabbany, 2021). GNSS receivers, however, vary in performance due to factors like environmental conditions, satellite geometry, and receiver quality. This article explores the most recent methods employed to evaluate GNSS accuracy, with a focus on achieving sub-meter precision and reliability.

1. Root Mean Square Error (RMSE) Analysis

Root Mean Square Error (RMSE) is one of the most widely utilized metrics for assessing GNSS receiver accuracy. RMSE calculates the difference between GNSS-measured coordinates and reference coordinates, providing an overall measure of positional error. It has become a standard method for evaluating accuracy across diverse GNSS applications, including those requiring sub-meter precision.

The primary advantage of RMSE is that it offers a single-value summary of the average error, allowing for straightforward comparisons between different receivers or correction methods. For example, in precision agriculture or urban planning, using RMSE enables decision-makers to quantify how much the GNSS-based positional data deviates from known control points (Rizos & Wang, 2022). RMSE is calculated by comparing the deviations in the X, Y, and Z axes and is particularly useful when determining how well a receiver performs under various environmental conditions.
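
As a minimal illustration of the calculation, the NumPy sketch below computes horizontal RMSE as the square root of the mean squared deviation between measured fixes and a reference point; the sample coordinates are invented for demonstration.

```python
import numpy as np

def rmse(measured: np.ndarray, reference: np.ndarray) -> float:
    """RMSE: sqrt of the mean squared coordinate error across all axes."""
    d = measured - reference                       # per-epoch errors (E, N)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

# GNSS fixes around a surveyed control point, in local metres (E, N).
fixes = np.array([[0.31, -0.12], [0.27, 0.05], [-0.18, 0.22], [0.09, -0.30]])
truth = np.zeros_like(fixes)                       # control point at the origin
print(f"Horizontal RMSE: {rmse(fixes, truth):.3f} m")
```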

2. Circular Error Probable (CEP)

Circular Error Probable (CEP) is another widely used method for evaluating the accuracy of GNSS receivers, particularly in measuring horizontal accuracy. CEP defines a circle within which 50% of the GPS measurements are expected to fall, offering a simplified yet effective way to assess positional accuracy in two-dimensional space. It is especially useful in GIS applications that rely heavily on horizontal coordinates, such as mapping and navigation (Langley, 2023).

CEP is often applied in tandem with RMSE to provide a more nuanced understanding of GNSS accuracy. While RMSE evaluates overall error, CEP focuses specifically on horizontal accuracy, making it ideal for GIS users interested in the precision of latitude and longitude measurements (Misra & Enge, 2019). By analyzing the distribution of positional errors, CEP gives an intuitive measure of how spread out or clustered the data points are around the true position.
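
Because CEP is simply a percentile of the horizontal radial errors, it can be estimated directly from the same data used for RMSE. The sketch below (with invented fixes) returns the radius containing 50% of the errors by default, and any other percentile on request.

```python
import numpy as np

def cep(fixes: np.ndarray, truth: np.ndarray, percentile: float = 50.0) -> float:
    """Radius of the circle containing `percentile`% of horizontal errors."""
    radial = np.hypot(fixes[:, 0] - truth[0], fixes[:, 1] - truth[1])
    return float(np.percentile(radial, percentile))

fixes = np.array([[0.31, -0.12], [0.27, 0.05], [-0.18, 0.22], [0.09, -0.30]])
truth = np.array([0.0, 0.0])
print(f"CEP50: {cep(fixes, truth):.3f} m  CEP95: {cep(fixes, truth, 95):.3f} m")
```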

3. Horizontal and Vertical Dilution of Precision (HDOP/VDOP)

Dilution of Precision (DOP) is a critical factor in determining GNSS accuracy, with Horizontal DOP (HDOP) and Vertical DOP (VDOP) values indicating the quality of satellite geometry and its impact on positional accuracy. Low DOP values suggest better satellite configurations, which improve the reliability of positional data.

HDOP and VDOP are particularly useful for assessing how satellite geometry affects horizontal and vertical accuracy, respectively. Many GNSS receivers report HDOP and VDOP values alongside positional data, allowing users to evaluate the quality of the satellite constellation at the time of data collection (Groves, 2020). This makes DOP values essential for understanding how well GNSS receivers perform in varying environmental conditions, such as urban canyons or heavily forested areas, where satellite visibility may be obstructed (Lachapelle & El-Rabbany, 2021).
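
Since most receivers log DOP values with every fix, a common quality-control step is to discard epochs whose geometry is too weak. The sketch below applies an illustrative HDOP cutoff; the threshold of 2.0 and the coordinates are assumptions for demonstration, not a universal standard.

```python
# Each logged fix carries receiver-reported DOP values alongside the position.
fixes = [
    {"lat": 1.5594, "lon": 103.6381, "hdop": 0.8, "vdop": 1.2},
    {"lat": 1.5595, "lon": 103.6382, "hdop": 2.6, "vdop": 4.1},  # weak geometry
    {"lat": 1.5594, "lon": 103.6380, "hdop": 1.1, "vdop": 1.9},
]

HDOP_MAX = 2.0  # illustrative cutoff; tighten it for sub-metre work
usable = [f for f in fixes if f["hdop"] <= HDOP_MAX]
print(f"Kept {len(usable)} of {len(fixes)} fixes with HDOP <= {HDOP_MAX}")
```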

4. Standard Deviation of Coordinates

The standard deviation of coordinates provides insight into the consistency of GNSS receiver performance by measuring the variation of positional data around a mean value. It is particularly useful in detecting irregularities or errors caused by multipath effects or signal interference. This method allows researchers to evaluate the spread of GNSS measurements and identify outliers that may be affecting overall accuracy.

The standard deviation is calculated by averaging the collected coordinates and determining how much each data point deviates from this average. A low standard deviation indicates that the positional measurements are closely clustered around the mean, reflecting good consistency and reliability (Kaplan & Hegarty, 2017). This method is especially beneficial for applications where long-term consistency is more critical than instantaneous accuracy, such as in environmental monitoring or geodetic surveying (Misra & Enge, 2019).
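
A short NumPy sketch of this calculation, using invented repeated fixes at one station: the per-axis sample standard deviation measures how tightly the fixes cluster around their own mean, a precision (consistency) check that is separate from comparing the mean against a known point.

```python
import numpy as np

# Repeated fixes at a single station, in local metres (E, N); invented values.
fixes = np.array([[5.12, 3.98], [5.09, 4.03], [5.15, 4.01], [5.11, 3.97]])

mean_pos = fixes.mean(axis=0)        # average of the collected coordinates
sigma = fixes.std(axis=0, ddof=1)    # sample standard deviation per axis
print(f"Mean E/N: {mean_pos}  sigma E/N: {sigma}")
# Small sigmas mean the fixes cluster tightly around the mean, even if the
# mean itself is offset from the true position (an accuracy question).
```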

5. Kinematic vs. Static Testing

In addition to static testing, where the GNSS receiver remains stationary at a known point, kinematic testing evaluates receiver performance during movement. Kinematic testing simulates real-world applications, such as vehicle tracking or navigation, where the receiver must maintain accuracy while in motion.

Kinematic testing provides valuable insights into how well a GNSS receiver performs under dynamic conditions, making it essential for assessing performance in navigation-based applications. In these tests, the receiver is moved along a predetermined path, and its recorded positions are compared to the known path using metrics like RMSE and CEP. This method is crucial for understanding how well a receiver can maintain accuracy while compensating for motion, an essential consideration in vehicle-based GIS applications (Li & Zhang, 2022).
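
One coarse way to score such a run is to measure each recorded fix against the known path. The sketch below uses nearest-vertex distance over an invented track; a production version would project each fix onto the nearest path segment to obtain true cross-track error.

```python
import numpy as np

def path_errors(track: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Distance from each recorded fix to the nearest reference-path vertex."""
    diffs = track[:, None, :] - reference[None, :, :]
    return np.min(np.linalg.norm(diffs, axis=2), axis=1)

reference = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])   # known path (m)
track = np.array([[0.2, 0.4], [9.8, -0.3], [20.1, 0.5]])       # recorded fixes
err = path_errors(track, reference)
print(f"Kinematic RMSE along the path: {np.sqrt(np.mean(err**2)):.3f} m")
```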

6. Multi-Constellation GNSS Evaluation

Modern GNSS receivers have the ability to track multiple satellite constellations, such as GPS, GLONASS, Galileo, and BeiDou, which improves the accuracy and reliability of positional data. Evaluating performance across multiple constellations allows researchers to identify which satellite systems and combinations provide the best accuracy in various environments.

Multi-constellation tracking has become particularly important in environments where satellite visibility is limited, such as urban areas with tall buildings or dense forests. By using multiple constellations, GNSS receivers can compensate for the limitations of individual systems, leading to improved accuracy and reliability (Wubbena & Seeber, 2021). Performance is evaluated by comparing data collected from different constellations and analyzing the impact on positional accuracy using metrics such as RMSE and standard deviation (Hofmann-Wellenhof & Lichtenegger, 2020).
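
A typical evaluation processes the same session under different constellation combinations and compares the error statistics. The sketch below runs that comparison over invented radial-error samples, so the numbers illustrate the workflow rather than any real receiver's behaviour.

```python
import numpy as np

# Horizontal radial errors (m) from one session, grouped by the constellations
# used in the solution; all values are invented for illustration.
errors = {
    "GPS only":           np.array([0.92, 1.10, 0.75, 1.30]),
    "GPS+GLONASS":        np.array([0.70, 0.85, 0.60, 0.95]),
    "GPS+Galileo+BeiDou": np.array([0.45, 0.60, 0.40, 0.55]),
}

for combo, e in errors.items():
    print(f"{combo:<19} RMSE={np.sqrt(np.mean(e**2)):.2f} m  "
          f"sigma={e.std(ddof=1):.2f} m")
```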

7. Positional Accuracy Improvement with Differential Correction

Differential correction techniques such as Real-Time Kinematic (RTK), Satellite-Based Augmentation Systems (SBAS), and Precise Point Positioning (PPP) are commonly used to improve GNSS accuracy. These methods provide correction data that compensates for satellite and atmospheric errors, significantly enhancing the precision of positional measurements.

RTK, for example, can achieve centimeter-level accuracy, making it an invaluable tool for applications requiring high precision, such as cadastral mapping or infrastructure development. The effectiveness of differential correction is often assessed by comparing data collected with and without correction, with accuracy improvements quantified through RMSE and other metrics (Ge & Xie, 2023). These correction methods are crucial for ensuring reliable GNSS data in areas where uncorrected signals cannot deliver sub-meter accuracy.

8. Geostatistical Analysis

Geostatistical methods, such as Kriging and Spatial Autocorrelation, are increasingly used to analyze the spatial distribution of GNSS errors. These techniques help identify areas where errors cluster and understand how environmental factors, such as building density or tree cover, influence GNSS accuracy.

By adding a spatial dimension to error analysis, geostatistical methods offer valuable insights into the environmental variables that affect GNSS performance. Kriging, for instance, can model the spatial distribution of errors, allowing researchers to predict where inaccuracies are likely to occur based on environmental conditions (Ge & Xie, 2023). This approach is particularly useful for urban planners and environmental scientists who need to account for spatial biases in their data.
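
Kriging normally relies on a dedicated geostatistics library, but the companion diagnostic named above, spatial autocorrelation, is easy to sketch directly. The snippet below computes Moran's I with inverse-distance weights over invented error values; a result near +1 indicates that similar-sized errors cluster in space.

```python
import numpy as np

def morans_i(coords: np.ndarray, values: np.ndarray) -> float:
    """Moran's I with inverse-distance weights (diagonal weights set to zero)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    with np.errstate(divide="ignore"):
        w = np.where(d > 0, 1.0 / d, 0.0)
    z = values - values.mean()
    return float(n * np.sum(w * np.outer(z, z)) / (w.sum() * np.sum(z**2)))

# Invented example: larger errors cluster at the two "built-up" sites.
coords = np.array([[0, 0], [1, 0], [10, 10], [11, 10]], dtype=float)
errors = np.array([0.3, 0.4, 1.8, 2.1])
print(f"Moran's I: {morans_i(coords, errors):.2f}")  # positive => clustering
```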

9. Machine Learning-Based Accuracy Prediction

In recent years, machine learning techniques have emerged as a powerful tool for predicting GNSS accuracy based on environmental factors. Models such as decision trees, random forests, and neural networks use historical GNSS data and environmental conditions to predict likely accuracy levels before data collection occurs.

Machine learning models can analyze vast amounts of data to identify patterns and predict GNSS performance in challenging environments, such as areas with poor satellite visibility or extreme weather conditions (Kim & Park, 2022). This predictive capability enables GIS professionals to anticipate accuracy issues and adjust their data collection strategies accordingly, making machine learning an invaluable tool for improving GNSS reliability.
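
As a hedged illustration, the sketch below trains a scikit-learn random forest to predict horizontal error from satellite count, HDOP, and sky visibility. Every value and the target relationship are synthetic, chosen only to show the workflow of training on past surveys and querying the model before a planned one.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic history: [satellites used, HDOP, sky-view fraction 0-1] per survey.
X = np.column_stack([
    rng.integers(5, 20, 500),
    rng.uniform(0.5, 5.0, 500),
    rng.uniform(0.2, 1.0, 500),
])
# Fabricated target: error grows with HDOP and shrinks with sky visibility.
y = 0.3 * X[:, 1] / X[:, 2] + rng.normal(0.0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out surveys: {model.score(X_te, y_te):.2f}")

# Expected error at a planned site: 8 satellites, HDOP 2.5, 40% sky view.
print(f"Predicted error: {model.predict([[8, 2.5, 0.4]])[0]:.2f} m")
```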

Conclusion

The evaluation of GNSS receiver accuracy is critical to ensuring the reliability of spatial data in GIS applications. Recent advancements in evaluation methods, such as RMSE, CEP, DOP analysis, and machine learning-based prediction, provide powerful tools for assessing and improving GNSS accuracy. These methods allow GIS professionals to make informed decisions about the reliability of their GNSS receivers, ensuring that spatial data collection workflows are optimized for accuracy and precision. The growing use of multi-constellation GNSS receivers and differential correction techniques further enhances the accuracy of positional data, making these methods indispensable for modern GIS applications.

References

Ge, M., & Xie, X. (2023). Geostatistical Approaches in GNSS Accuracy Analysis. GIScience Journal.

Groves, P. (2020). Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems.

Hofmann-Wellenhof, B., & Lichtenegger, H. (2020). GNSS: Global Navigation Satellite Systems – Applications and Challenges.

Kaplan, E. D., & Hegarty, C. (2017). Understanding GPS/GNSS: Principles and Applications.

Kim, Y. K., & Park, S. H. (2022). Machine Learning for GNSS Accuracy Prediction in Challenging Environments. Sensors.

Lachapelle, G., & El-Rabbany, A. (2021). Understanding GNSS Errors and Performance Metrics. GNSS Solutions.

Langley, R. B. (2023). Circular Error Probable in GNSS Accuracy Assessment. Navigation Journal.

Li, Y., & Zhang, L. (2022). Kinematic Testing for GNSS Receivers: A Review. International Journal of Navigation and Observation.

Misra, P., & Enge, P. (2019). Global Positioning System: Signals, Measurements, and Performance.

Rizos, C., & Wang, J. (2022). Evaluating GNSS Receiver Accuracy Using RMSE. Journal of Geodesy.

Wubbena, G., & Seeber, G. (2021). Multi-Constellation GNSS in Complex Environments. Journal of GNSS Engineering.

New Academic Session for Semester 1, 2024/2025

https://builtsurvey.utm.my/wp-content/uploads/2024/09/Ppt-Slide-Welcome-scaled-1.jpg

By Shahabuddin Amerudin, 3 October 2024

UTM JOHOR BAHRU: As the new academic session kicks off on October 6, 2024, faculty members and staff at the Faculty of Built Environment and Surveying, Universiti Teknologi Malaysia (UTM) are preparing to welcome students and start the first semester of the 2024/2025 academic year with high spirits. As a leading educational institution in the field of Geoinformation, the Department of Geoinformation is dedicated to providing quality education and innovative research to meet the evolving needs of the global community.

A significant change taking place at UTM is the restructuring of the faculties, effective October 1, 2024. As part of this effort, the Geoinformation Program is now officially recognized as the Department of Geoinformation. This change also sees the title of Director replaced with Head of Department, reflecting a more specific role in managing this rapidly growing department. This rebranding opens opportunities for the Department of Geoinformation to continue strengthening its reputation nationally and internationally while ensuring the delivery of relevant and high-quality programs.

To enhance undergraduate education, the Department of Geoinformation has successfully attracted a significant number of new students. The registration of undergraduate students on September 28, 2024, saw 144 new students enrolling in two main programs. The Bachelor of Engineering (Honors) in Geomatics (SBEUH) welcomed 83 students, while the Bachelor of Science (Honors) in Geoinformation (SBEGH) recorded 61 students. Registration data for the Geomatics Program reveals that there are 13 students from matriculation, 1 from STPM (Malaysian Higher School Certificate), 4 from foundation studies, and 65 from diploma programs. In the Geoinformatics Program, the latest data shows there are 52 students from STPM, matriculation, and foundation studies, along with 10 diploma students. These numbers reflect a strong confidence in the quality of the academic programs, which effectively combine theoretical knowledge with practical skills necessary for the dynamic industry.

At the postgraduate level, the Department of Geoinformation is also witnessing an increase in student enrollment. The registration for postgraduate students, which commenced on October 2, 2024, is still ongoing, and so far, 12 students have registered for PhD programs in Geomatics, Geoinformatics, and Remote Sensing. The Master of Philosophy program has attracted 8 students, while an additional 6 students have enrolled in the Master’s by Course Work. The department hopes to enhance international marketing efforts to attract more postgraduate students from around the globe, especially as the field of Geoinformation plays an increasingly vital role in addressing global issues such as climate change, disaster management, and smart city development.

The Department of Geoinformation offers accredited academic programs designed to meet industry demands, equipping students with various skills in geospatial data acquisition and collection technologies, geospatial data processing and analysis, as well as the development of critical Geographic Information Systems (GIS) applications for decision-making. The uniqueness of these programs lies in their research-based learning approach and close collaboration with both public and private sectors, allowing students to gain valuable and competitive industry experience.

Through ongoing efforts to enhance teaching and research quality, the Department of Geoinformation frequently invites the international community to participate in its programs, which have proven capable of producing outstanding and competent graduates in the geospatial field. For students from diverse educational backgrounds, the Department of Geoinformation provides opportunities to enhance their knowledge, whether through undergraduate admissions from matriculation, STPM, foundation studies, or diploma programs, or through postgraduate pathways for further studies.

Faculty members and staff welcome students and researchers from around the world to join them in expanding new horizons in the field of Geoinformation. The Department of Geoinformation at UTM remains committed to being a leader in impactful research and providing a holistic educational platform that aligns with the ever-pressing global needs.

The Role of Generative AI in Transforming Programming Practices

Generative AI (GenAI) has emerged as a transformative tool in the field of software development, extending its capabilities beyond text generation to the creation of computer code. This advancement aligns with the understanding that computer code is essentially another form of language, making it possible for AI models to aid developers in their work. GenAI can accelerate various programming tasks, thereby enhancing the efficiency of software development. Its ability to convert natural language instructions into executable code and provide real-time code suggestions has the potential to reshape the role of developers in the industry. However, the question remains: how effective is GenAI in producing quality code? According to a study conducted by Alphabet’s DeepMind, their AlphaCode model performed on par with novice coders who had about six months to a year of training (Metz, 2022). This marks a significant milestone for AI, and as the technology continues to improve, it is expected that these models will soon match the capabilities of more seasoned programmers.

One of the most promising aspects of GenAI is its accessibility. Even individuals with minimal coding experience can use GenAI to write functional code, democratizing the process of software development. This makes GenAI particularly useful for non-programmers who need to build applications but lack the necessary technical expertise. The model’s ability to translate plain language into programming code lowers the barriers to entry for software creation. Furthermore, GenAI can assist in critical development tasks such as gathering software requirements, reviewing code for inconsistencies, and even fixing bugs. For example, during the requirement-gathering phase, GenAI can generate a comprehensive list of functional needs based on user inputs, ensuring that no key elements—like security—are overlooked (Brown, 2023). Additionally, GenAI’s real-time code completion capabilities help developers by suggesting code snippets as they type, significantly speeding up the process and minimizing human errors.

GenAI also contributes to the testing and maintenance of software by automating several phases of the software development lifecycle. It can review existing code, propose optimizations, and generate test cases to ensure the code meets performance and security standards. This predictive capability is already being explored by companies like Dynatrace, which aims to use AI to anticipate system failures before the code goes into production. In a recent interview, Dynatrace’s Chief Technology Officer, Bernd Greifeneder, highlighted that their AI model is designed to predict potential system failures, enabling developers to fix problems before they cause issues in real-time applications (ZDNet, 2023). This “predictive AI” concept, if fully realized, could represent a paradigm shift in software development, where preventing faults becomes the norm rather than reacting to them post-launch.

Despite its many advantages, the integration of GenAI into programming is not without challenges. Issues such as AI hallucinations, where the model generates plausible but incorrect code, as well as concerns over data security and intellectual property, must be addressed. There is a risk that proprietary code may be inadvertently used to train AI models, exposing sensitive information to external parties. Therefore, strong safeguards and human oversight are essential to mitigate these risks (Kaur & Singh, 2023). Additionally, while GenAI can automate many tasks, it is unlikely to replace software developers entirely. Instead, the role of developers is expected to evolve, with AI serving as a co-pilot that supports, rather than supplants, human expertise.

The future of programming will likely involve a closer collaboration between developers and AI, much like how other professionals such as journalists and doctors are increasingly working alongside AI tools. High-level developers, whose responsibilities often extend beyond just coding, will benefit from GenAI’s ability to handle repetitive tasks, allowing them to focus on more complex problem-solving activities. In fact, studies have shown that developers spend only around 20% of their time writing code, with the remaining time dedicated to tasks like project management, requirement gathering, and testing (Williams, 2023). GenAI’s capacity to generate, review, and test code ensures that the time developers spend coding is more productive, and it reduces the burden of mundane tasks such as internal documentation.

Moreover, GenAI’s impact extends beyond professional developers. Everyday users can now leverage these tools to create software without any prior knowledge of programming languages. This accessibility could lead to an increase in innovation, as individuals outside the tech industry can use AI to develop apps or services tailored to their needs. However, while GenAI tools are highly capable, they are not infallible. Instances of overconfidence in incorrect code outputs demonstrate that AI should be used as a supplement rather than a replacement for human judgment in software development (Park, 2022).

GenAI represents a major shift in how software development is approached. By automating repetitive coding tasks and improving efficiency, GenAI serves as a valuable tool that enhances the productivity of developers and makes programming more accessible to non-experts. However, as with any emerging technology, ethical and practical challenges remain, necessitating human oversight to ensure that the benefits of GenAI are fully realized. As the technology continues to evolve, it is poised to play an increasingly important role in the future of software development.

References

  • Brown, J. (2023). The evolving role of AI in software engineering. IEEE Software, 40(2), 15-21.
  • Kaur, A., & Singh, P. (2023). Challenges and opportunities in AI-driven software development. ACM Computing Surveys, 55(6), 1-27.
  • Metz, C. (2022). AlphaCode’s impact on novice programmers. The New York Times. Retrieved from https://www.nytimes.com
  • Park, H. (2022). AI hallucinations in coding: Risks and solutions. Journal of Artificial Intelligence Research, 18(4), 53-70.
  • Williams, M. (2023). The changing landscape of software development. ACM Queue, 21(3), 24-31.

Analysis Phase of a Web and Mobile Integrated Mapping System: Tools, Diagrams, and Suitable Models

By Shahabuddin Amerudin

The analysis phase of a web and mobile integrated mapping system is a vital part of the system development process, where all requirements are gathered, assessed, and organized to ensure a smooth and effective design and implementation of the system. This phase involves understanding the functional and non-functional requirements, identifying the key features needed by end-users, and setting up technical specifications that the system must adhere to. A variety of tools and methodologies are employed during this phase to collect, analyze, and document user requirements, and to visualize the system’s structure through diagrams and models.

Several tools are essential during the analysis phase to manage requirements and collaborate efficiently. Jira or Trello are often used as project management tools to document and organize user stories, tasks, and system requirements. These tools enable the development team to track progress, prioritize features, and ensure that all stakeholders are aligned with the project’s goals. For documentation, tools like Microsoft Word or Google Docs are used to create a detailed System Requirement Specification (SRS) document, which outlines the functional and non-functional requirements, system constraints, and user expectations. The SRS acts as a blueprint that guides the subsequent design and development phases. To gather feedback from a broad range of users, tools like SurveyMonkey or Google Forms can be deployed. These tools help collect input from field officers, environmental researchers, and administrators regarding the features they need, such as real-time GPS tracking or layered map visualizations. This feedback is crucial in shaping the final system.

For collaboration and communication, tools like Zoom, Microsoft Teams, or Slack are essential. These platforms allow real-time interaction between the stakeholders and the development team, ensuring that any ambiguities or issues with the system requirements are clarified immediately. In addition to verbal communication, visual collaboration tools like Lucidchart or Microsoft Visio are used to create diagrams that visually represent the system’s architecture and interactions. These tools help stakeholders and developers conceptualize the system structure, ensuring a common understanding across the team.

The analysis phase also involves the creation of various diagrams and models that help visualize the system’s behavior and data flow. Use Case Diagrams are particularly important as they provide a high-level view of the system by identifying the key interactions between users and the system. For example, in a web and mobile integrated mapping system, different actors, such as field officers, administrators, and environmental researchers, interact with the system to upload geospatial data, view environmental data layers, or generate reports. Tools like Lucidchart or Draw.io can be used to create these diagrams, offering a clear representation of the user’s interactions with system functions.

Another critical diagram is the Data Flow Diagram (DFD), which models how data moves through the system. In a mapping system, a DFD might illustrate how data is collected via mobile devices in the field, transmitted to the backend server, processed in a geospatial database, and then displayed on the web interface. DFDs are essential in understanding how data flows across various system components, ensuring that data from multiple sources—such as sensors or manually uploaded environmental data—is processed smoothly and efficiently. Tools like Draw.io or Lucidchart can be used to create these diagrams.

For database design, Entity-Relationship Diagrams (ERDs) are vital. ERDs model the relationships between the different data entities in the geospatial database. For instance, concepts such as “Location,” “Environmental Data,” and “User” are represented as entities, and their relationships define how data is connected. This helps in structuring the geospatial database to manage vast amounts of data efficiently. Tools such as MySQL Workbench or Visual Paradigm can be employed to create these ERDs, ensuring that the data relationships are well understood and that the database will support the system’s needs effectively.

User Journey Maps are also valuable in the analysis phase, as they depict the entire process a user goes through when interacting with the system. For instance, the journey map might illustrate the steps a field officer takes to collect environmental data using a mobile device, upload the data to the system, and visualize it on a web platform. Creating these journey maps using tools like Miro or UXPressia helps in identifying potential pain points and areas where the system can be improved to optimize the user experience.

Finally, Context Diagrams are created to provide an overview of the system’s boundaries and its interactions with external systems. In the context of the mapping system, a context diagram might show how the system interacts with external GPS services, third-party environmental databases, or cloud storage solutions. Tools like Visio or Lucidchart can be used to create context diagrams, providing a simplified yet comprehensive view of how the system integrates with external components.

When it comes to choosing an appropriate modeling approach for this type of system, the Unified Modeling Language (UML) is often used: a standardized notation that provides a visual representation of the system’s functionality. UML diagrams, such as Class Diagrams, Use Case Diagrams, and Sequence Diagrams, allow developers to map out the system’s architecture, object relationships, and sequences of interactions, ensuring that all components are well-structured. In terms of development models, both the Waterfall and Agile models can be suitable, depending on the project’s scope. If the requirements are clear and unlikely to change, the Waterfall Model, with its linear approach, works well as it ensures that each phase is completed before moving to the next. However, for projects where requirements might evolve, an Agile Model is preferable due to its iterative approach, allowing for continuous feedback and adjustment.

The Prototyping Model is also highly effective in the analysis phase of system development. By developing early versions or mock-ups of key system features, such as the map interface or data upload functionality, stakeholders can provide feedback on the look and feel of the system. This allows developers to make early adjustments based on user input, reducing the risk of major revisions later in the project.

In conclusion, the analysis phase of the web and mobile integrated mapping system is a complex process that involves several tools, diagrams, and models to ensure that the system is designed according to the user’s needs and technical specifications. Tools like Jira, Lucidchart, and Google Forms help organize and document requirements, while diagrams such as Use Case Diagrams, Data Flow Diagrams, and ERDs provide essential visualizations of the system’s architecture and data flow. The models used during this phase, whether UML, Waterfall, Agile, or Prototyping, are chosen based on the project’s scope and adaptability, ensuring that the system is robust, scalable, and aligned with user expectations.

System Analysis and Design: Development of a Web and Mobile Integrated Mapping System for Environmental Monitoring

By Shahabuddin Amerudin

In recent years, environmental monitoring has become increasingly crucial for conservation efforts, leading to the development of innovative systems that leverage web and mobile technologies. One such system is the Web and Mobile Integrated Mapping System, which was developed to track and analyze environmental hotspots, including forest areas, mangrove plantations, and wildlife habitats. This system provides real-time data collection and visualization capabilities through a web interface and a mobile application, allowing users to access and contribute data in a seamless and efficient manner. The system was developed over a period of six months, using an Agile methodology, and involved a multidisciplinary team that included developers, GIS specialists, environmental scientists, and testers.

Timeframe and Team Structure

The project followed a six-month development timeline. During the first two months, the team focused on project planning, gathering requirements, and designing the system architecture. By the third and fourth months, the system’s frontend and backend were developed, with database setup and initial integration efforts. In the fifth month, the web and mobile platforms were integrated, tested, and deployed. Finally, in the sixth month, the system underwent user testing, feedback collection, and final adjustments before being fully implemented. The project involved a diverse team: a Project Manager coordinated activities across different teams, ensuring deadlines were met and project goals achieved; a System Analyst gathered requirements and defined the system architecture; Frontend and Backend Developers built the user interface and server-side functionality; GIS experts contributed geospatial knowledge and data integration; Environmental Scientists provided the domain expertise required to define the environmental monitoring parameters; a UX/UI Designer ensured the interface was user-friendly; and a QA Team conducted extensive testing to guarantee that the system was robust and reliable.

Tools and Methodologies Used

The system was developed using a variety of methodologies and tools that ensured it met functional, technical, and user requirements. The Agile Scrum methodology was employed to allow for iterative development, rapid feedback, and continuous improvement. The team used Jira for project management and task tracking, while Slack and Trello facilitated communication and sprint planning. These tools allowed for clear documentation of progress, effective communication between the teams, and the ability to adapt to any emerging challenges.

During the requirement analysis phase, tools like Lucidchart were used to design system architecture and workflows, and Google Docs was used for requirement documentation. Interviews with stakeholders and users helped to define the necessary system features, such as real-time data visualization, GPS-enabled field data collection, and multi-layer map interfaces. Based on these findings, a comprehensive technical specification was prepared.

The system design phase involved the use of Object-Oriented Design (OOD) and Service-Oriented Architecture (SOA) principles. This modular approach allowed for the integration of multiple components, making the system highly scalable and adaptable. Figma was used to design the user interface for both the web and mobile platforms, ensuring consistency in user experience. MySQL Workbench was employed to design the database schema, which stored both geospatial and non-spatial data, ensuring data integrity and accessibility.

Frontend and Backend Development

Frontend development for the web platform was handled using React.js, a powerful JavaScript library known for its flexibility and speed in creating dynamic web applications. The interactive map functionality was built using Leaflet.js, an open-source library that allowed for easy integration of map layers, markers, and geospatial data visualization. For advanced data visualization, D3.js was employed to generate charts and graphs that depicted trends in environmental data, such as pollution levels or habitat changes. On the mobile side, Flutter was used, enabling the development of a single codebase that supported both Android and iOS devices. The mobile app integrated the Google Maps API for geolocation services, ensuring that users could upload data, view environmental hotspots, and navigate to areas of interest directly from their smartphones.

On the backend, Node.js and Express.js were used to develop the server-side architecture, providing APIs to handle communication between the frontend and the database. PostGIS, a geospatial extension for PostgreSQL, was employed for efficient storage and querying of spatial data, allowing for the manipulation of geographical information such as coordinates, boundaries, and layers. For the mobile app, Firebase was chosen to handle user authentication and real-time database functionality, which allowed for seamless data syncing between field agents using the mobile app and the central database.

Testing and Implementation

The system underwent rigorous testing to ensure that it met the required performance, reliability, and scalability standards. Automated testing for the web application was carried out using Selenium, while Postman was used to test the RESTful APIs developed with Node.js, ensuring they could handle data requests from the frontend effectively. On the mobile side, Flutter Test was used to perform unit and integration testing, verifying the functionality of the app on both Android and iOS platforms. Testing ensured that the system performed well under high traffic loads and when large datasets were processed, particularly in scenarios involving real-time data uploads from remote locations.

The deployment phase involved the use of Docker for containerizing the application, allowing for consistent deployment across different environments. The system was hosted on Amazon Web Services (AWS), which provided scalable cloud infrastructure to accommodate varying user loads and ensured high availability. Nginx was used as a web server and reverse proxy to handle incoming requests and distribute traffic efficiently. The system was monitored post-launch using AWS CloudWatch, which tracked performance metrics, while GitHub and Jenkins were used for continuous integration and deployment (CI/CD), automating the process of testing and deploying updates to the system.

Maintenance and Updates

Once the system was fully implemented, it entered the maintenance phase, where regular updates were made to fix bugs and improve functionality. AWS CloudWatch continued to provide real-time monitoring, alerting the development team to any potential issues such as server overloads or slow response times. The system’s version control was managed using GitHub, which also allowed for bug tracking and collaborative development for future updates. Continuous integration practices were maintained with Jenkins, ensuring that new features could be rolled out quickly without disrupting the system’s operations.

Conclusion

The Web and Mobile Integrated Mapping System for environmental monitoring represents a comprehensive solution that leverages modern web and mobile technologies to provide a robust platform for tracking and visualizing environmental data. The use of advanced tools such as React.js, Flutter, Leaflet.js, and PostGIS, combined with a well-structured Agile development process, ensured that the system was built efficiently within the allotted six-month timeframe. The involvement of multiple teams, including developers, GIS specialists, and environmental experts, ensured that the system was both technically sound and aligned with the practical needs of its end-users. This project highlights how the integration of web and mobile technologies can be applied to solve real-world problems in environmental conservation and monitoring.

System Analysis and Design: The Development of a Smart Healthcare System

One of the latest innovations in computer systems is the development of Smart Healthcare Systems, which integrate advanced technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), and cloud computing. These systems provide real-time health monitoring, diagnostics, and predictive healthcare services, with the goal of transforming the healthcare industry. By using AI-powered analytics and IoT devices, healthcare providers can monitor patient data in real-time, detect abnormalities, and provide timely interventions. In this case, we examine the development process of a Smart Healthcare Monitoring System, which utilizes wearable devices and AI models to offer continuous monitoring and predictive health diagnostics.

The system development followed a structured approach based on System Analysis and Design methodologies, starting with the Project Planning and Management phase. An Agile methodology was adopted to allow for iterative development, enabling the system to be built in sprints. Agile project management tools such as Jira and Trello were employed to manage tasks and track progress. This approach allowed the development team to respond quickly to changing requirements and feedback from healthcare stakeholders, ensuring that the system was aligned with the practical needs of healthcare professionals.

Next, the Requirement Analysis phase involved gathering detailed information from end-users, including doctors, nurses, patients, and hospital administrators. The development team conducted interviews and distributed surveys using tools like Microsoft Teams for remote interviews and Google Forms for surveys. This data was essential in understanding the key functionalities the system needed, such as real-time patient monitoring and predictive health alerts. Based on these insights, the team was able to compile a list of system requirements, which formed the foundation for the subsequent design and development stages.

In the System Specification phase, the team created detailed documentation outlining both functional and non-functional requirements. Functional requirements included the system’s ability to monitor patient data from wearable devices, provide real-time alerts for abnormal health conditions, and integrate seamlessly with existing electronic health records (EHR). Non-functional requirements such as scalability, performance, and security were also considered. Unified Modeling Language (UML) diagrams, created using tools like Lucidchart and Visual Paradigm, were used to illustrate system components and interactions, while Microsoft Word was employed to draft the full requirement specification documentation.

The System Design phase was crucial in defining how the system would be built. The development team applied Object-Oriented Design (OOD) principles to ensure the system was modular, maintainable, and scalable. They chose the Model-View-Controller (MVC) architectural pattern to separate concerns, which improved the organization of the codebase. The user interface (UI) was designed using Adobe XD, focusing on creating an intuitive dashboard for healthcare providers and a user-friendly mobile application for patients. For database design, MySQL Workbench was used to define the structure of the relational database, which would store patient health records and diagnostic information.
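
To make the MVC separation concrete, here is a toy Python sketch (not the project's actual code) in which the model owns the data, the view handles presentation, and the controller mediates between them.

    # Toy MVC sketch for a patient-readings feature (illustrative only).
    class PatientModel:                          # Model: data and domain rules
        def __init__(self):
            self._records = {}

        def add_reading(self, patient_id, heart_rate):
            self._records.setdefault(patient_id, []).append(heart_rate)

        def latest(self, patient_id):
            return self._records.get(patient_id, [None])[-1]

    def render_reading(patient_id, heart_rate):  # View: presentation only
        return f"Patient {patient_id}: latest heart rate = {heart_rate} bpm"

    class PatientController:                     # Controller: coordinates model and view
        def __init__(self, model):
            self.model = model

        def record_and_show(self, patient_id, heart_rate):
            self.model.add_reading(patient_id, heart_rate)
            return render_reading(patient_id, self.model.latest(patient_id))

    controller = PatientController(PatientModel())
    print(controller.record_and_show("P001", 72))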

During the System Development phase, a full-stack development approach was adopted. The frontend was built using React.js for the web interface, while Flutter was chosen for mobile application development, allowing the system to support multiple platforms. On the backend, a microservices architecture was implemented using Node.js to handle API requests and Flask for deploying AI models that performed diagnostic tasks. The system integrated IoT devices such as wearable heart rate and blood pressure monitors, which were developed using Arduino and Raspberry Pi. Data from these devices was processed and stored in both MySQL (for structured data) and MongoDB (for semi-structured IoT data). The AI models were developed using TensorFlow for deep learning and scikit-learn for machine learning algorithms, enabling the system to predict potential health issues based on real-time data.
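
As an illustration of this backend pattern, the following is a minimal sketch of a Flask microservice serving predictions from a saved TensorFlow model; the model file, feature layout, and endpoint name are assumptions made for the example.

    # Hypothetical Flask microservice wrapping a saved Keras model.
    from flask import Flask, jsonify, request
    import numpy as np
    import tensorflow as tf

    app = Flask(__name__)
    model = tf.keras.models.load_model("vitals_model.h5")  # hypothetical model file

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json()
        # Assumed input: {"heart_rate": ..., "systolic": ..., "diastolic": ...}
        features = np.array([[payload["heart_rate"],
                              payload["systolic"],
                              payload["diastolic"]]], dtype="float32")
        risk = float(model.predict(features)[0][0])  # e.g. probability of an anomaly
        return jsonify({"risk_score": risk})

    if __name__ == "__main__":
        app.run(port=5000)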

After development, the System Testing phase began to ensure the system met all functional and non-functional requirements. A combination of automated testing and manual testing was performed using tools such as Selenium for user interface testing, Postman for API testing, and JUnit for unit testing of backend components. User Acceptance Testing (UAT) was conducted with healthcare professionals, who validated that the system met clinical standards and user expectations.

In the System Implementation phase, the system was deployed to a cloud environment using Amazon Web Services (AWS) for scalability and high availability. Docker was used to containerize different components of the system, ensuring consistent deployment across different environments. Continuous Integration/Continuous Deployment (CI/CD) pipelines were set up using Jenkins, automating the deployment process and allowing the team to rapidly release updates and new features based on feedback and bug reports.

Finally, in the System Maintenance phase, the team set up monitoring using AWS CloudWatch to track system performance metrics such as server load, response times, and security logs. Regular updates were managed via GitHub for version control, and the CI/CD pipeline was used to deploy updates and patches. The system was designed to be adaptable, allowing for continuous improvement as new healthcare requirements emerged and the system evolved to meet future demands.

In conclusion, the development of the Smart Healthcare System followed a comprehensive and structured approach based on established System Analysis and Design methodologies. From initial requirement gathering to deployment and maintenance, each phase was meticulously planned and executed, ensuring that the final system was both functionally robust and capable of evolving with the changing landscape of healthcare technology. Through the use of cutting-edge tools like React.js, TensorFlow, and AWS, the development team was able to deliver a powerful system that improves patient care while optimizing healthcare workflows.

System Analysis and Design: A Comprehensive Guide


By Shahabuddin Amerudin

System Analysis and Design is a foundational process in the development of software and information systems. It involves a series of structured activities aimed at understanding the specific needs of users and designing a system that meets these needs through the application of technology. This process is essential to ensure that the final system operates effectively, meets all user requirements, and is maintainable in the long term.

The Importance of System Analysis and Design

In modern organizations, information systems play a vital role in facilitating decision-making, improving operational efficiency, and providing competitive advantages. Whether the system is designed for automating business processes, enhancing data management, or improving customer interaction, the proper analysis and design of that system determine its success. A poorly analyzed and designed system can lead to inefficiencies, increased costs, and user dissatisfaction.

The primary goal of system analysis and design is to create a well-structured solution that fulfills specific user needs while balancing technical, financial, and time constraints. It provides a roadmap that guides developers, managers, and stakeholders through a clear process from the initial concept to the implementation and maintenance of the system.

Key Concepts in System Analysis and Design

The process of system analysis and design can be broken down into several key stages. Each of these stages represents a distinct phase that contributes to the successful development of a system:

1. Project Planning and Management

Project Planning and Management is the initial stage in any system development project. It involves defining the scope, objectives, and deliverables of the project, along with estimating timeframes and budgets. Planning ensures that the project is feasible, and management involves tracking progress to ensure the project remains on schedule and within budget.

Key activities in project planning include:

  • Scope Definition: Clearly outlining what the system will and will not include.
  • Resource Allocation: Assigning team members, technical tools, and other resources required for the project.
  • Risk Management: Identifying potential risks and developing mitigation strategies.
  • Timeline Creation: Establishing deadlines and milestones to ensure that the project progresses as planned.

Effective project management is crucial to ensuring that system development is efficient and meets user expectations. Without a structured plan and continuous monitoring, projects can suffer from scope creep, budget overruns, and delayed timelines.

2. Requirement Analysis

Requirement Analysis is one of the most critical phases of system development. This phase involves gathering information from various stakeholders, including end users, managers, and technical staff, to understand the exact needs and issues that the new system must address.

The main activities in requirement analysis include:

  • Data Collection: Using interviews, surveys, and observation techniques to gather input from users.
  • Problem Identification: Identifying pain points, inefficiencies, and challenges that users face with current systems or processes.
  • Functional Requirement Documentation: Defining the specific functionalities the system must provide, such as data processing capabilities, user interfaces, and reporting features.
  • Non-Functional Requirements: Capturing requirements related to system performance, security, usability, and scalability.

Accurate and comprehensive requirement analysis ensures that the system meets the needs of its users and addresses all relevant issues. Poor requirement analysis can lead to the development of systems that fail to meet expectations, resulting in costly rework or even project failure.

3. System Specification

Following requirement analysis, the System Specification phase translates user requirements into a detailed technical blueprint. System specifications serve as a guide for developers during the implementation phase.

A typical system specification document will include:

  • Functional Specifications: A detailed breakdown of all the functions the system must perform, organized by priority and user interaction.
  • Data Requirements: Specifications for the data structures, databases, and data flow that will support the system.
  • User Interface Design: A description of how users will interact with the system, including screen layouts, navigation, and user experience considerations.
  • Technical Specifications: Defining the technologies, programming languages, frameworks, and hardware that will be used to develop and run the system.

Clear and detailed system specifications are essential for ensuring that the development team understands exactly what needs to be built and that the final system aligns with user expectations.

4. System Design

System Design takes the specifications and turns them into a workable design plan. This phase includes creating the overall architecture of the system, designing databases, defining workflows, and creating the user interface.

There are two key components of system design:

  • Logical Design: This focuses on what the system will do, including data flow diagrams, entity-relationship models, and process models that outline how the system will process data.
  • Physical Design: This focuses on how the system will be built, specifying the hardware, software, network configurations, and physical architecture that will support the system.

During the design phase, it’s also crucial to consider non-functional requirements, such as system security, performance, scalability, and maintainability. The design phase ensures that the system is well-structured, efficient, and meets both functional and technical requirements.

5. System Development

In the System Development phase, developers begin to build the system based on the design specifications. This includes writing the code for the application, developing databases, and integrating various system components.

This phase may be broken down further:

  • Coding and Programming: The development team writes code to implement the functionalities outlined in the design. This may involve using multiple programming languages, frameworks, and tools.
  • Database Development: The system’s databases are created based on the data models and structures defined during the design phase.
  • Integration: Different components of the system, such as the database, user interface, and processing logic, are integrated to work together.

Collaboration between developers, testers, and designers is essential to ensure that the system is built correctly and meets design specifications.

6. System Testing

System Testing is conducted to verify that the system functions as intended and meets the required standards for performance, security, and usability. Testing is performed at several levels:

  • Unit Testing: Individual components or units of the system are tested in isolation to ensure they work as expected.
  • Integration Testing: Ensures that different modules or components work together seamlessly.
  • System Testing: Tests the entire system as a whole to verify that it meets functional and non-functional requirements.
  • User Acceptance Testing (UAT): Involves end users testing the system in a real-world environment to ensure that it meets their needs.

Testing is crucial for identifying and resolving any bugs, security vulnerabilities, or performance issues before the system is deployed.
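
To make the unit-testing level concrete, here is a minimal sketch using Python's built-in unittest module; the function under test is purely illustrative.

    # A unit test exercises one function in isolation.
    import unittest

    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_basic_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(80.0, 0), 80.0)

    if __name__ == "__main__":
        unittest.main()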

7. System Implementation

After the system has been thoroughly tested, it moves into the System Implementation phase. This is where the system is deployed in a real-world environment. Key activities in this phase include:

  • Installation: Installing the system on servers or user devices.
  • User Training: Providing training for users to ensure they are familiar with the system’s features and can use it effectively.
  • Data Migration: If the system replaces an older system, data from the old system may need to be migrated to the new one.

Successful implementation ensures that the system is fully operational and that users are ready to transition to the new platform.

8. System Maintenance

Once the system is in use, it enters the System Maintenance phase. Maintenance ensures that the system continues to operate effectively over time and adapts to changing user needs and environmental factors.

Key activities in system maintenance include:

  • Corrective Maintenance: Fixing bugs or issues that arise during the system’s operation.
  • Adaptive Maintenance: Making changes to the system to adapt to new requirements, such as updates in technology or business processes.
  • Preventive Maintenance: Regularly updating and optimizing the system to prevent future issues.

Ongoing maintenance is essential to ensure the system remains efficient, secure, and aligned with user needs.

Conclusion

System Analysis and Design is a critical process in the development of successful software and information systems. By following a structured approach, system developers can ensure that the final product meets user needs, is technically sound, and can adapt to future changes. Each phase—from planning and requirement analysis to design, development, and maintenance—plays an essential role in ensuring the success of the project. Through careful analysis and thoughtful design, systems can provide long-term value and efficiency to organizations, improving their overall productivity and effectiveness.

System Analysis and Design: An Overview


By Shahabuddin Amerudin

System Analysis and Design refers to the structured process of understanding the requirements of a system and then planning and building technological solutions to meet those needs. In the context of software or information system development, this discipline involves several key stages, including requirement analysis, system design, testing, and implementation.

The following are key concepts within System Analysis and Design:

  1. Project Planning and Management
    Project planning and management encompasses all activities necessary to ensure that a system development project is completed on time, within budget, and achieves its desired objectives. This involves setting clear project goals, timelines, and resource allocations, and managing risks and stakeholder expectations throughout the project’s lifecycle.
  2. Requirement Analysis
    This phase involves gathering information from end users to understand their needs and the challenges they face. The goal of requirement analysis is to produce a detailed list of system features or functions that the new system must provide. This phase is crucial in ensuring that the system developed will address user needs effectively.
  3. System Specification
    Based on the findings from the requirement analysis, technical system specifications are created. This includes defining the system’s functionalities, user interfaces, and data structures. System specifications serve as a blueprint for developers to follow during the implementation phase, ensuring that the system meets the identified requirements.
  4. System Design
    System design involves the creation of a detailed plan for how the system will be built. This includes designing databases, user interfaces, and the overall architecture of the system. The design phase translates user requirements and technical specifications into a concrete plan that developers will use to build the system.
  5. System Development
    During the system development phase, programmers and developers begin coding the system according to the design specifications. This phase involves writing the application code, developing databases, and integrating different system components. The development phase is where the actual system takes shape based on the earlier design.
  6. System Testing
    After development, the system undergoes testing to ensure it functions as expected. System testing includes unit testing, which checks individual components; integration testing, which ensures different components work together; and system testing, which validates the entire system’s performance. This phase is critical to identifying and fixing bugs or errors before the system goes live.
  7. System Implementation
    Once the system has been developed and tested, it is implemented in a real-world environment. This phase includes installing the software, training users, and migrating any necessary data. System implementation ensures that the system is operational and users are equipped to use it effectively.
  8. System Maintenance
    After the system is implemented, ongoing maintenance is necessary to ensure it continues to function efficiently. This involves updating features, fixing any issues that arise, and making improvements as user needs evolve. System maintenance is a continuous process that ensures the system remains relevant and effective over time.

System Analysis and Design plays a critical role in ensuring that the systems developed meet user needs, operate efficiently, and can be managed effectively throughout their lifecycle. By following a structured approach, developers can deliver systems that are both functional and adaptable to changing requirements.

This methodology is integral to the success of any system development project, providing a clear roadmap from initial planning through to system maintenance.

The Role of AI in Software and Application Development

By Shahabuddin Amerudin

Artificial Intelligence (AI) has become one of the core technologies in software and application development, revolutionising how software is built, tested, and maintained. With recent advances in machine learning, automation, and natural language processing (NLP), AI helps accelerate coding, makes software testing more efficient, and eases the integration of intelligent analytics into applications. The technology also brings challenges, however, including security issues, dependence on particular platforms, and the risk of over-reliance on AI tools. This article examines how AI supports the software development process and the current tools available, focusing on their advantages, drawbacks, and risks, and on how these issues can be addressed.

AI in Automated Code Writing

One of the most widespread uses of AI in software development is automated code writing. A leading example is GitHub Copilot, which uses the Codex model, a variant of GPT-3 developed by OpenAI. GitHub Copilot assists programmers by suggesting lines of code as they type, based on the surrounding context, and by offering fixes for the syntax or logic problems they encounter. This speeds up development, particularly for programmers learning a new language or working on large projects where time matters. There are, however, copyright concerns, because Copilot is trained on code from open repositories, which may lead to code being used without permission (OpenAI, 2022).

Tools such as Replit Ghostwriter likewise offer automated code writing, helping with code completion and debugging. They suit beginners who want to accelerate their learning with AI assistance. The greatest advantage of such tools is that they speed up development and reduce coding errors during writing. The significant risk, however, is heavy reliance on AI suggestions without the programmer understanding the underlying logic or structure of the code, which can lead to inefficient or vulnerable code (Replit, 2023).

AI in Automated Software Testing

Software testing is a critical phase of development, and AI has proven its worth in speeding it up. Tools such as Testim use artificial intelligence to create and run automated tests. They not only cut the time needed for testing but also adapt to changes in the software, supporting regression testing and keeping the software stable even after many changes. Although AI offers a faster, more consistent way to test software, its main weakness is that it may miss complex issues that only manual testing can reveal (Testim, 2023).

Other products, such as Mabl, also stand out as AI-assisted automated testing tools. Mabl can identify bugs and run in-depth analyses of software performance. Its strength is continuous testing, keeping software quality monitored throughout the development cycle. One challenge, however, is that reliance on automated testing can lead to more thorough manual testing being neglected, especially for complex applications that require empirical evaluation (Mabl, 2023).

AI for Analytics and Machine Learning

In the machine learning and analytics domain, tools such as TensorFlow have become the primary choice for building machine learning and deep learning models. TensorFlow is an open-source framework that supports a wide range of tasks, including natural language processing, computer vision, and predictive analytics. Its main strength is its ability to support large-scale models that require complex data processing, which makes it well suited to applications such as image recognition, business trend forecasting, and text classification. TensorFlow does, however, have a fairly steep learning curve, making it more suitable for developers with a strong background in AI and machine learning (TensorFlow, 2022).
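
As a minimal illustration of the TensorFlow workflow, the sketch below defines, compiles, and trains a tiny Keras classifier on random stand-in data; it shows the mechanics only, not a realistic model.

    # Define/compile/fit with Keras; the data here is random filler.
    import numpy as np
    import tensorflow as tf

    X = np.random.rand(200, 4).astype("float32")   # stand-in feature matrix
    y = (X.sum(axis=1) > 2.0).astype("float32")    # stand-in labels

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.predict(X[:3]))                    # three example predictions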

Beyond TensorFlow, Hugging Face has become a leading platform for natural language processing (NLP). It provides pre-trained models such as GPT, BERT, and RoBERTa, allowing developers to build text-based applications quickly and efficiently. NLP applications such as chatbots, sentiment analysis, and language translation become much easier with these models. Their main advantage is the ability to adapt existing models to domain-specific data without training a model from scratch. One possible challenge is that pre-trained models are not always fully suited to every kind of data and may require further tuning to reach optimal performance (Hugging Face, 2023).
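
For illustration, the sketch below uses the ready-made sentiment-analysis pipeline from the Hugging Face transformers library, which downloads a pre-trained model on first run.

    # One-line access to a pre-trained sentiment model via transformers.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("This new release makes development so much faster!"))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]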

No-Code AI: A Revolution in Application Development

Advances in AI have also driven the rise of no-code and low-code platforms, on which anyone can build applications without writing code. Platforms such as Bubble let users build interactive web applications quickly and easily without deep technical experience. AI is integrated into these platforms to help users customise the user interface (UI) and automate parts of the development process. The strength of no-code is that it opens the door to far more non-technical builders, lowering the barrier to entry into software development (Bubble, 2023).

No-code nevertheless comes with constraints. Platforms such as OutSystems offer limited control over an application's internal logic, making them less suitable for applications that need complex data handling or business logic. Vendor lock-in is a further problem: users may struggle to move their applications to another platform if they later need to change or extend the underlying technology (OutSystems, 2023).

Benefits, Drawbacks, and Risks of Using AI in Development

The main benefit of AI in software development is greater efficiency and speed. AI accelerates code writing, shortens software testing, and enables smarter, more adaptive applications. AI-driven no-code also lets users without a technical background build applications, broadening access to software development. The main drawback is over-reliance on AI systems, which can mean losing control over software quality and security: users may never learn the underlying logic needed for sound development because they lean too heavily on automatically generated AI suggestions (Rahwan et al., 2023).

Security risks are another major concern, especially where AI is used in software testing or no-code development. Applications built this way may carry undetected vulnerabilities or poorly optimised code. Vendor lock-in on no-code platforms can also complicate migrating an application or integrating it with other systems, limiting its long-term scalability (Benfield, 2023).

Recommendations and Conclusion

Several approaches can address the issues and risks associated with AI in software development. First, it is important to balance AI with manual testing and rigorous security audits; developers must ensure applications are tested thoroughly for weaknesses that AI may not detect. Second, no-code platforms should be chosen carefully, preferably ones that support open APIs to ease future migration and integration. Third, training and education on AI technology should be expanded so that users understand both its limits and its strengths, avoiding total dependence on these tools without a grasp of software development fundamentals (Hoffman, 2022).

With a careful approach, AI can become one of the most competitive tools in software and application development, but it demands judicious use to avoid the risks that come with it.


References:

Benfield, J. (2023). AI in software testing: The new frontier. Journal of Software Engineering, 14(2), 99-112.

Bubble. (2023). No-code app development platform. https://bubble.io

GitHub Copilot. (2022). AI-assisted coding. https://github.com/features/copilot

Hoffman, A. (2022). Securing AI-driven software development: Challenges and solutions. AI & Society, 19(1), 54-72.

Hugging Face. (2023). Transformers for NLP applications. https://huggingface.co

Mabl. (2023). AI-powered continuous testing platform. https://mabl.com

OpenAI. (2022). AI models and their use in code completion. https://openai.com

Balancing the Roles of Universities and Industry in Technology Development

Universities are often regarded as centres of innovation and technology development. This is where new theories are honed, deep research is conducted, and new technologies are designed and tested. In this context, universities ought to act as pioneers of technological development. Unlike industry, which is focused on profit, universities serve as a platform for long-term research largely free of commercial constraints. There are therefore grounds to argue that universities should be more technologically advanced, since they shape and explore the concepts that industry later puts to use.

The reality, however, is not always so. Universities sometimes lag behind industry in practical technology, owing to factors such as limited budgets, bureaucracy, and the absence of close ties between academia and industry. Universities often fall behind on the application side because new technology in industry advances rapidly under market competition and the drive for profitable innovation. Technologies such as artificial intelligence (AI), machine learning, and the Internet of Things (IoT), for example, developed rapidly inside technology companies before universities could build relevant, comprehensive curricula or education systems around them.

One issue frequently raised is the gap between what universities teach and what industry actually needs. Many university programmes favour theory over application, leaving graduates underprepared for the latest technological challenges in the workplace. Industry often needs practical technology that solves problems immediately, whereas universities can become trapped in theoretical studies that take a long time to mature into something commercially useful.

This discussion must be fair, though, because the core mission of a university is to generate new knowledge and develop technology for the long term, not merely to follow current technological trends. University research is usually more fundamental and rarely has immediate commercial application, yet it is the foundation for the technological innovations that industry later commercialises.

To close the technology gap between universities and industry, strategic collaboration must be strengthened. Universities can play a more important role in technology development through collaborative research with industry. This ensures that the technology being developed at universities aligns with current industry needs, while universities also explore future technologies that industry has yet to touch. A good example is the technology-incubator model, which brings academic researchers and companies together to develop technology prototypes that can be tested and commercialised.

Even so, a further problem arises from the lack of incentives for lecturers and researchers to engage in industry collaboration, because university evaluation systems prioritise academic publication over economic or technological impact. As a result, technology developed at universities may reach the market too late or fail to meet current industry needs.

Another issue affecting universities' ability to lead industry technologically is limited financial resources. Funding for research and technology development at universities, particularly in developing countries, is often insufficient to buy the latest technology or build advanced research laboratories. Large companies, by contrast, can fund their own research and development and buy state-of-the-art equipment.

Universities should play a larger role as developers of technology, not merely as users. The reality, however, is that several challenges must first be overcome, including the gap between theory and application, the lack of collaboration with industry, and funding constraints. Although some universities can outpace industry in technology development (for instance in fundamental research), most need a more strategic and collaborative approach to keep their technology relevant and at the leading edge.

Universities as Technology Pioneers and Centres of Innovation

Universities are regarded as the main bastions of knowledge, technology, and innovation. History shows that they have often pioneered their fields, created new technologies, and provided solutions to global challenges. That role is now being challenged by several factors, above all the growing dependence of universities on industry for funding, technology, and hardware, when it should be industry that depends on universities as centres of excellence and innovation. This article examines these problems and discusses how universities can be restored to their rightful place as the primary point of reference.

Fundamentally, universities must act as pioneers in academia and technology. They should be the prime movers in developing the new technologies and approaches that shape industry and society. What we see today, however, is the reverse: universities having to learn from industry rather than the other way around. This situation stems from several factors, including a shortage of funds, limited access to the latest technology, and an unbalanced relationship between universities and industry.

A shortage of funding has become an increasingly severe problem for most universities. Insufficient funds force universities to go begging to industry for assistance in the form of money, software, and hardware. This in turn breeds dependence on external parties and prevents universities from acting independently as creators of technology.

Some companies, for their part, exploit the situation by treating universities as a dumping ground for old software and hardware no longer relevant to the business world, when what universities need is current technology to build the capabilities of staff and students. The result is that universities cannot match industry in providing a modern teaching and learning environment on a par with the demands of the job market.

Another critical issue is the use of unlicensed software by university staff and students. This happens because universities cannot afford up-to-date commercial software. Although governments and universities have encouraged the use of open-source software, it is not enough to cover the basic skills industry requires. Open-source software has advantages in cost and openness, but most large companies and industrial sectors still run commercial software in their daily operations.

This creates a dilemma for graduates entering the job market without a working knowledge of the commercial software that is critical in industry. Without these skills, graduates may struggle to compete with candidates already proficient in it. So although the push for open-source software is welcome, universities must still take steps to ensure their graduates can master the commercial software commonly used in industry.

Strengthening universities' financial resources is a key step toward ensuring the autonomy and sustainability of higher-education institutions. Governments and universities should pursue a range of fund-raising initiatives, including strategic collaboration with industry, but without becoming overly dependent on it. One route is research tied to current issues or challenges faced by industry, with the university acting as a provider of innovative solutions. Research funding can also be strengthened through international collaboration, whether with overseas organisations or through globally funded projects.

Universities must also be more careful and strategic in accepting technology from industry. Donated technology should be evaluated thoroughly to ensure it is relevant and can improve teaching and learning. Technology that is obsolete or no longer used in industry should be declined, or at least not accepted without careful study. Vetting of this kind is essential if universities are to keep pace with the latest technology and not fall behind the tide of industrial change.

In addition, universities need to balance training in commercial and open-source software. This can be done through partnerships with commercial vendors that offer lower-cost educational licences, and through short courses that expose students and staff to both kinds of software, ensuring graduates have skills that match industry needs.

In-house software development also deserves attention. Universities should build their capacity to develop software through research and development (R&D) centres, allowing them to create software tailored to modern teaching and research needs. With this approach, universities depend less on expensive commercial software and instead build internal technological capability to support academic work.

In the long term, universities need to change their academic culture to focus on innovation and on technology development that can be translated into practical applications. In this way, a university becomes not only a centre for disseminating knowledge but also a centre for creating new technology that industry can use. Universities must lead in technological innovation to remain relevant and competitive in both the academic and industrial worlds.

In conclusion, making universities technology pioneers and centres of innovation requires a range of strategic steps: strengthening financial resources, vetting donated technology, balancing training across commercial and open-source software, developing software in-house, and reshaping academic culture to be more innovative. Only then can universities reclaim their role as outstanding centres of excellence and innovation.

Social Media and GIS for Spatial Data Collection and Analysis

By Shahabuddin Amerudin

Introduction

In the digital era, social media has grown into a platform used not only for social interaction but also as a rich data source for many kinds of analysis. Integrating social media with Geographic Information Systems (GIS) opens major opportunities in sectors such as disaster monitoring, security, and environmental analysis. With the geotagging features built into most social media platforms, including Twitter, Instagram, and Facebook, data can be analysed spatially to yield deeper insight into patterns and trends on the ground.

Leveraging GIS and Social Media for Spatial Data Collection

Geotagged data from social media enables information to be collected in real time. Every time a user posts an update, data such as location, time, and content is attached. This data can be fed into a GIS to analyse aspects such as human activity, land-use change, and emerging social trends. For example, a study by Resch et al. (2020) showed that Twitter data can be used to understand urban mobility patterns and user behaviour at particular locations.
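
As a sketch of this workflow, the snippet below loads geotagged posts into a GIS layer with GeoPandas and counts posts per district using a spatial join; the CSV columns and the districts.shp boundary file are assumptions made for the example.

    # Hypothetical geotagged-post export joined against district boundaries.
    import pandas as pd
    import geopandas as gpd

    posts = pd.read_csv("geotagged_posts.csv")   # assumed columns: lat, lon, text, timestamp
    gdf = gpd.GeoDataFrame(
        posts,
        geometry=gpd.points_from_xy(posts["lon"], posts["lat"]),
        crs="EPSG:4326",                         # WGS 84, the usual CRS for GPS coordinates
    )
    districts = gpd.read_file("districts.shp").to_crs("EPSG:4326")
    joined = gpd.sjoin(gdf, districts, predicate="within")
    print(joined.groupby("index_right").size())  # posts per district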

Natural Disaster Monitoring with Social Media and GIS

One important application of integrating social media with GIS is monitoring and responding to natural disasters. When a disaster such as a flood or an earthquake strikes, many social media users report the situation on platforms such as Twitter or Facebook. With GIS tools, these reports can be mapped to give a picture of the affected areas, helping rescue agencies decide which areas need help most urgently and improving the efficiency of disaster management. A study by Crooks, Croitoru, and Stefanidis (2013) showed that during natural disasters social media can supply early information unavailable from traditional sources. When Hurricane Sandy struck the United States in 2012, for example, much of the disaster information came from social media, which helped in planning a rapid response.

Analysing Public Perception with GIS and Social Media

GIS can also be used to understand public perception of a place or an event. Sentiment shared on social media can be analysed with GIS to assess how public opinion differs by location. Such data is very useful for monitoring perceptions of urban development, environmental conservation, or any social issue in the public eye. Ghaffarian et al. (2022) used social media data to understand public sentiment toward sustainable development in urban areas, with GIS used to map that sentiment and compare perceptions between urban and rural areas.

Tourism Development and Local Marketing

Geospatial data from social media can also be put to work in tourism. Using GIS, the locations most often mentioned or visited by social media users can be analysed to identify popular attractions. Local authorities and tourism agencies can use this information to plan better marketing strategies and improve infrastructure at popular destinations. A study by Sigala (2018) showed that integrating GIS with social media data plays an important role in mapping tourist destinations and planning digital marketing strategies.

Community Engagement and Public Awareness through Social Media

Community engagement is essential to the success of any project, particularly one involving mapping or environmental monitoring. Through social media, GIS can be used to draw the public into activities such as community (crowdsourced) mapping or environmental monitoring. In an environmental monitoring project, for example, social media users can be asked to upload photos or videos from specific locations, helping the authorities track environmental change. Barve et al. (2020) showed how social media data can be used to map biodiversity in particular areas, involving the community in the data-collection process.

Conclusion

Combining social media with GIS offers significant opportunities for more dynamic, real-time spatial data collection and analysis. From disaster monitoring to public perception analysis, the technology speeds up decision-making and strengthens planning with more accurate, detailed data. In a rapidly changing environment, this approach not only helps in understanding current patterns but also supports faster, more effective responses.

References

  • Barve, V., Brenskelle, L., Li, D., Stucky, B. J., Barve, N., Hantak, M. M., … & Guralnick, R. P. (2020). Methods for broad-scale biodiversity analyses using open-access data. Nature Ecology & Evolution, 4(3), 294-305.
  • Crooks, A., Croitoru, A., & Stefanidis, A. (2013). #Earthquake: Twitter as a distributed sensor system. Transactions in GIS, 17(1), 124-147.
  • Ghaffarian, A., Khamis, M. Z., Abdul Rashid, Z., & Alias, N. (2022). Public sentiment analysis for sustainable urban development using GIS and social media data. Journal of Urban Planning and Development, 148(3), 04021054.
  • Resch, B., Summa, A., Sagl, G., Zeile, P., & Exner, J. P. (2020). Urban Emotions—Geo-semantic emotion extraction from crowdsourced data and its application in urban planning. Journal of Geographic Information Science, 29(3), 256-273.
  • Sigala, M. (2018). Social media and the co-creation of tourism experiences. Tourism Management Perspectives, 12, 134-147.

AI and Machine Learning Technologies for Monitoring and Filtering Internet Content

In an increasingly advanced digital era, the spread of illegal content such as pornography, gambling, and extremism through websites poses a major challenge to governments and regulators. Measures to block access to such sites must be taken carefully so as not to harm users' freedoms or legitimate online content. One approach that is more innovative and effective than DNS redirection is the use of artificial intelligence (AI) and machine learning. These technologies offer a more targeted solution, able to identify and filter content automatically while minimising the risk of violating privacy and freedom of expression. This article discusses how AI can be implemented for digital content monitoring and the steps needed to make it effective without compromising users' rights.

Implementing AI and machine learning to monitor and filter content automatically requires several key steps to make the system work efficiently and effectively. The first is to build AI and machine learning models capable of automatically identifying content that breaks the law. This begins with assembling training data covering a wide range of websites, images, videos, and text containing unlawful content such as pornography, gambling, and extremist material; the data can be obtained from government records, cybersecurity firms, and other legitimate sources. The next step is to train the models to distinguish lawful from unlawful content, using machine learning techniques such as supervised learning and deep learning and drawing on input from legal, cybersecurity, and programming experts. The models must then be monitored continually and refreshed with current data, with the algorithms updated regularly to keep pace with new content and the new evasion techniques offenders adopt.
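
As a minimal sketch of the supervised-learning step, the snippet below trains a TF-IDF plus logistic-regression text classifier with scikit-learn; the tiny labelled sample is purely illustrative, and a real system would need a large curated corpus.

    # Toy text classifier: flag gambling-style promotions for review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "win big at our online casino today",         # illustrative flagged samples
        "claim your free betting bonus now",
        "community fundraiser for the local school",  # illustrative allowed samples
        "weather forecast for the weekend",
    ]
    labels = [1, 1, 0, 0]  # 1 = flag for review, 0 = allow

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["exclusive casino bonus inside"]))  # -> [1]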

Once the AI and machine learning models are built, they must be integrated into the national internet infrastructure in cooperation with Internet Service Providers (ISPs) and telecommunications companies. The AI can be tied into the DNS system to identify and block access to unlawful websites without affecting legitimate ones, while proxies can monitor internet traffic requests and decide whether each should be blocked or allowed. ISPs and telecommunications companies must be directly involved for the system to work, building infrastructure that can support AI-based filtering in real time.

AI-based content filtering can screen material automatically, without human intervention, across the main content types: text, images, video, and audio. For text, AI can identify passages containing keywords or phrases that break the law, such as promotions for gambling or obscene material, with natural language processing (NLP) algorithms analysing text across languages and contexts. For images and video, AI equipped with computer vision can identify unlawful material such as pornography, using image classification and object detection models to monitor visual content automatically. Audio content such as podcasts or voice recordings containing extremist or inflammatory material can likewise be identified through voice analysis and transcription.
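
To illustrate the computer-vision side, the sketch below runs one frame through a pre-trained Keras image classifier (MobileNetV2 trained on ImageNet); a production filter would be trained on purpose-built moderation categories instead, and the input file name is a placeholder.

    # Generic image classification mechanics with a pre-trained model.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    img = tf.keras.utils.load_img("frame.jpg", target_size=(224, 224))  # placeholder frame
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # Top-3 generic ImageNet labels; a moderation model would emit policy
    # categories instead.
    print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])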

To avoid privacy violations, the system must be designed to comply with strict security and privacy guidelines. User data should be anonymised before AI analysis so that personal information cannot be identified by the system; techniques such as data masking and encryption serve this purpose. The government and independent bodies should also audit the system regularly to ensure no abuse of power occurs, and it should be reviewed to prevent political interference or the blocking of lawful websites.
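
As a sketch of one anonymisation technique mentioned here, the snippet below replaces user identifiers with salted one-way hashes before analysis; salt management is simplified for illustration.

    # Pseudonymise user IDs with a salted SHA-256 hash (illustrative only).
    import hashlib
    import os

    SALT = os.urandom(16)  # in practice, kept secret and rotated by the operator

    def pseudonymise(user_id: str) -> str:
        return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

    print(pseudonymise("user-12345"))  # stable within a salt period, not reversible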

As part of safeguarding digital freedom and user choice, the government should let citizens use the DNS service they prefer, including third-party services such as Google and Cloudflare. This gives users access to faster, more reliable systems while the government and local ISPs continue improving their own DNS services. To support the use of AI in online content monitoring, the government must also enact clear laws and policies on the role and limits of AI, protecting users' rights while ensuring unlawful content is blocked efficiently. Collaboration with technology experts, the private sector, and international bodies is essential to develop guidelines that protect freedom of expression and user privacy while curbing harmful content.

Finally, the government should run awareness campaigns to educate the public about AI's role in monitoring internet content and about their rights as users. This will head off misunderstanding and anxiety over privacy and online freedom and help people see the benefits of a safer, more efficient system. Overall, using AI and machine learning to filter content automatically is a more targeted and effective method than DNS redirection. With continuous model training, ISP integration, privacy safeguards, and clear legislation, such a system can block illegal content without compromising users' freedom and digital security. Implementation must also be supported by fast internet infrastructure and ongoing user education so that the policies introduced are properly understood.

The Jurugeospatial and Juruukur Service Schemes in the Public Service

By Shahabuddin Amerudin

Within the Malaysian Public Service, the Jurugeospatial (geospatial officer) and Juruukur (surveyor) service schemes play important roles in mapping and surveying. Although the two schemes differ markedly, they also share some important similarities. This article explains the differences and similarities between them, with examples of their areas of responsibility.

The Jurugeospatial scheme, comprising 31 posts, carries unique responsibilities in geospatial data management and digital map production. Its main tasks include designing electronic maps, thematic maps, and topographic maps, and its scope covers supplying the geospatial information needed for government purposes such as defence, national development, resource management, education, and administration. A Jurugeospatial officer might, for instance, help develop digital maps for military applications or for urban planning that demands current, detailed data. Their role therefore centres on producing and analysing geospatial data using advanced GIS (Geographic Information System) technology.

The Juruukur scheme, by contrast, carries out more technical and traditional survey work. Its tasks cover topographic, geodetic, cadastral, and utility surveys, including standard control traverses, control surveys, party-wall surveys, land acquisition surveys, and surveys of urban and rural areas. A practical example is surveying for infrastructure such as highways or new townships, where measurement accuracy is critical. Juruukur officers thus handle the physical, technical side of measurement, which demands high precision in mapping and in documenting land and structures.

Despite the differences in scope and qualifications, the two schemes share important common ground. First, both contribute to national development through mapping and measurement: Jurugeospatial officers provide data for resource analysis and planning, while Juruukur officers ensure accurate measurement of areas tied to resource use, such as agricultural land, mining areas, and infrastructure. Both support national development objectives by producing the data and information needed to deliver development projects.

Jurugeospatial and Juruukur officers may also collaborate on large projects. In many cases they work together on developments that require integrating digital maps with physical measurement. In building a new township, for example, Jurugeospatial officers might produce the digital maps and data analysis for town planning, while Juruukur officers carry out the physical surveys to ensure the design and construction are accurate. This collaboration ensures every aspect of a project is considered and properly executed.

In terms of qualifications, a Jurugeospatial officer typically needs an academic background in geoinformatics, GIS, or geographic information systems, which may mean a bachelor's or master's degree in a related field, plus further training in GIS software and digital mapping technology. A Juruukur officer, by contrast, needs qualifications in surveying, such as a degree in land surveying or geomatics, together with deep knowledge of the measurement standards and mapping techniques used in the industry; practical training with survey instruments and mapping software is essential for accurate, effective work.

Overall, although the Jurugeospatial and Juruukur service schemes differ in scope of work and qualifications, they are alike in how they contribute to mapping and surveying. Both play an important part in supporting national development through the production of accurate, high-quality data and through collaboration on major projects that require integrating digital maps with physical measurement.

The Shadow Behind the Legacy

By Shahabuddin Amerudin

The night air was warm, but the mood in the restaurant was slightly tense. Around the table, the faces of old friends, once full of confidence, were now shadowed by the lines of age and worry. A classmate's child's wedding reception had turned into an informal gathering for reliving old memories. The laughter that had filled the room began to fade, replaced by serious talk about a larger issue.

“Hey, did you all hear about the newly passed amendment to the Licensed Land Surveyors Act 1958?” Rizal suddenly cut in, his voice a little heavy.

Everyone glanced at one another. The topic had been hotly debated the month before. Many were angry; no small number were confused. But as usual, the uproar was brief, and then everything went quiet. Everyone had grown used to the games of politics. I only smiled thinly, trying to keep myself from getting drawn into the conversation too early. But Rizal was not satisfied with my silence.

“What are you smiling at, bro? You don't feel anything about that amendment?” he pressed.

I slowly set my lukewarm cup of tea down on the table. My eyes met theirs one by one, trying to read what they really wanted to hear. I knew that behind the smiles and the calm expressions lay a bigger worry.

“To me,” I began, my voice quiet but clear, “the amendment is good, but… there is something bigger we ought to be worried about.”

Rizal frowned. “Bigger? What could be bigger than this act? This is the law that will change how we work.”

I smiled again, more wryly this time. I knew what was playing on their minds. Some were perhaps afraid of change, afraid of losing income or status. But that was not what troubled me.

“Let me ask you this: how much longer do you think you can go down to site and do the work? A year? Five years?” The question came out slowly, but it cut deep.

For a moment, silence. The faces that had looked so sure began to change. Some glanced away; others looked down. No one dared to answer.

“We all know this work is no joke. Our eyes are getting blurry, our breath is getting shorter, but the job is still the same. Wading through swamps, climbing hills. How much longer do you think you can keep this up?” I asked again, sharper this time.

Rizal tried to speak, but his voice faltered halfway. Every one of them knew what I meant. We were no longer young. And the reality is that this work is not for old men.

Our conversation changed direction. Stories about the children came out, easing some of the weight in our chests. Some spoke proudly of a child who was now a doctor, an engineer, an architect. Everyone smiled broadly while telling of their children's successes.

But I knew there was one question that would wipe those smiles away. I threw it out without warning.

“Do any of your kids want to become surveyors like us?”

Just as I expected, the room fell silent. Only the whir of the fan could be heard. The eyes that had been bright a moment ago now looked down, lost. I watched them glance at one another, waiting for someone brave enough to speak. But no one was. They all knew the answer kept in their own hearts. None of their children wanted to be surveyors. Why?

I went on, “We can't blame them. Surveying is not a glamorous job. Kids these days live on TikTok and Instagram. Our work? Trekking through jungle, baking in the sun. There is nothing in it to attract them. Only the sad stories ever show up in their feeds.”

Rizal leaned back in his chair and drew a long breath. “That's true. My own kid isn't interested. And I'm a Licensed Land Surveyor.”

I nodded. That was what haunted us all. Our children do not want to carry on this legacy. Even the public-university (IPTA) students taking geomatics are mostly not there out of interest; many were pushed into it, or enrolled simply to fill the quota.

“When I ask these students why they took geomatics, they answer: because my father told me to. Some say, ‘my dad is a surveyor.’ But are they interested? Silence,” I shared, shaking my head.

Rizal's eyes grew dimmer. He knew what I was trying to say. The future of this profession is growing ever hazier. If we do not plant an interest in the younger generation, who will carry on this work a few years from now? Foreign workers? Not our own children.

“What about your kids, any of them interested in survey work?” Rizal asked, his voice almost a whisper, as if he did not want to hear the answer himself.

I was silent, swallowing everything pent up in my chest. My eyes drifted to the window, toward a night darkness that promised nothing. I knew the answer, but I was tired. Tired of the truth I had to accept.

“Enough… let's change the subject,” I replied at last.

Silence.

The Role and Responsibilities of Pusat Geospatial Negara in Ensuring the Quality and Accuracy of Geospatial Data

By Shahabuddin Amerudin

Pusat Geospatial Negara (PGN) is responsible for ensuring that the geospatial data collected and disseminated by government agencies and the private sector in Malaysia reaches a high standard of quality and accuracy. Data quality is critical because geospatial data is used across many sectors, including urban planning, natural resource management, disaster monitoring, and infrastructure development. PGN has therefore developed a range of mechanisms and standards to ensure that the data it receives and distributes meets the requirements it has set.

1. Benchmarks for the Quality and Accuracy of Geospatial Data

PGN sets several benchmarks for assessing the quality and accuracy of geospatial data. One aspect assessed is uniformity of data format: geospatial data must be standardized in formats such as shapefile, GeoJSON, or other globally recognized formats so that it integrates easily with other data. Positional and attribute accuracy is another primary focus; submitted data must carry precise geographic coordinates and correct attribute information.

Temporal accuracy is also an important element of quality assessment. Data must be relevant to a specific time period, particularly in contexts such as disaster management or urban planning. In addition, data consistency and completeness are assessed to ensure no data is missing or incomplete in a way that could compromise analysis. Finally, PGN evaluates fitness for use: how suitable the data is for a given application.

These benchmarks are guided by international standards such as ISO 19115 for geospatial metadata and the standards set by the Open Geospatial Consortium (OGC) for geospatial data interoperability. By applying these standards, PGN can ensure that the data it receives is of high quality and trustworthy.
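
As an illustration, a basic completeness gate of the kind ISO 19115 implies could be automated along these lines. This is a minimal sketch: the required-field list is a loose, hypothetical subset of ISO 19115 core elements, and the check_metadata() helper and file name are invented for the example, not an actual PGN tool.

```python
import json

# Hypothetical subset of ISO 19115 core metadata elements (illustrative only)
REQUIRED_FIELDS = [
    "title", "abstract", "reference_date", "responsible_party",
    "spatial_reference", "lineage", "temporal_extent",
]

def check_metadata(record: dict) -> list:
    """Return the required metadata elements that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

if __name__ == "__main__":
    with open("dataset_metadata.json") as fh:  # invented file name
        metadata = json.load(fh)
    missing = check_metadata(metadata)
    if missing:
        print("Rejected: incomplete metadata ->", ", ".join(missing))
    else:
        print("Metadata passes the basic completeness check.")
```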

2. Procedures for Monitoring Geospatial Data Quality

PGN's monitoring of geospatial data quality involves several key steps. First, every dataset submitted by a government or private agency must pass initial validation, in which aspects such as format, positional accuracy, temporal accuracy, and attribute data are checked. This initial validation is essential to ensure that only data meeting the basic criteria is accepted.

PGN also uses purpose-built verification software to check the integrity and compatibility of data; the software can flag any inconsistency or inaccuracy in a submission. In addition, PGN conducts periodic audits of the geospatial data held in its databases to confirm that the data continues to meet the established standards. These audits involve a thorough examination of the data by a team of specialists in Geographic Information Systems (GIS) and data management.
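
The checks such verification software performs can be pictured with a short script. The following is a minimal sketch, assuming a GeoJSON submission and using GeoPandas/Shapely; the file name and the specific checks are illustrative, not PGN's actual pipeline.

```python
import geopandas as gpd

# Load a (hypothetical) submitted dataset
gdf = gpd.read_file("submitted_parcels.geojson")

issues = []
if gdf.crs is None:                               # format/CRS check
    issues.append("no coordinate reference system declared")

invalid = int((~gdf.geometry.is_valid).sum())     # geometry integrity check
if invalid:
    issues.append(f"{invalid} invalid geometries (self-intersections, etc.)")

empty = int(gdf.geometry.is_empty.sum())          # completeness check
if empty:
    issues.append(f"{empty} empty geometries")

null_attrs = int(gdf.drop(columns="geometry").isna().sum().sum())  # attribute check
if null_attrs:
    issues.append(f"{null_attrs} missing attribute values")

print("PASS" if not issues else "FAIL: " + "; ".join(issues))
```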

Beyond technical verification, data is assessed through expert review to confirm that it is relevant and suitable for its intended use. Together, these steps ensure that the geospatial data being shared is of high quality and can be used with confidence by all parties.

3. Action on Non-compliance with Standards

If submitted data does not meet the established standards, PGN takes several courses of action. The first is to give the submitting party feedback, identifying the aspects that fail to comply and recommending corrections. This step is important in ensuring that the submitting agency understands the requirements it must meet.

In certain cases, PGN also offers technical assistance or training to help the submitting agency meet the standards in future, a proactive approach to ensuring that every agency can produce high-quality geospatial data. Where non-compliance is serious, PGN reserves the right to reject the data from inclusion in the national database, a firm measure to safeguard the integrity of the nation's geospatial data.

4. Developing the Skills and Knowledge of PGN Staff

To carry out its monitoring duties effectively, PGN must ensure that its staff possess strong technical skills and knowledge. Continuous training is essential to keep staff current with the latest developments in GIS, data management, and geospatial data quality standards, and can take the form of online courses, workshops, and relevant conferences.

PGN can also encourage staff to obtain professional certifications in the geospatial field, such as Certified Geographic Information Systems Professional (GISP) or certifications from the Open Geospatial Consortium (OGC). These certifications not only strengthen the credibility of, and confidence in, staff capabilities, but also attest to globally recognized skills. Beyond that, collaboration in geoinformatics with institutions of higher learning such as Universiti Teknologi Malaysia (UTM) would give PGN access to current knowledge and technical input from experts in the field.

It is equally important that PGN staff hold basic qualifications in geoinformatics, whether at diploma, bachelor's, postgraduate, or doctoral level. This ensures that staff have a deep grasp of the fundamental concepts and applications of geoinformatics critical to their duties at PGN. By recruiting on the basis of qualifications specific to this field, the organization can maintain high professional standards and minimize the risk of errors or non-compliance with the standards it enforces.

5. Ensuring the Quality of Data Supplied to Users

As the organization responsible for supplying geospatial data to a wide range of stakeholders, PGN must ensure that the data it provides is of high quality and fit for use. To that end, PGN needs to run a continuous verification process before datasets are released to users, including strict checks that the data meets user requirements.

PGN must also provide complete documentation and metadata for every dataset delivered to users. This documentation helps users understand the context, accuracy, and limitations of the data they receive. To improve service quality, PGN should also gather user feedback on the data supplied; this feedback can then be used to refine its monitoring and quality-assessment processes.

Conclusion

PGN plays a vital role in ensuring that geospatial data in Malaysia is of high quality and trustworthy. Through strict standards, comprehensive monitoring procedures, and investment in staff development, PGN can ensure that the geospatial data submitted by the various agencies is accurate, relevant, and fit for use. These measures are essential to avoid the “rubbish in, rubbish out” problem that can undermine the efficiency of national management and planning. By implementing these strategies, PGN can ensure that geospatial data in Malaysia remains a high-value asset in national development.

References:

  1. International Organization for Standardization (ISO). (2019). ISO 19115: Geographic information — Metadata. ISO.
  2. Open Geospatial Consortium (OGC). (2020). OGC Standards. Retrieved from https://www.ogc.org/standards
  3. Jabatan Ukur dan Pemetaan Malaysia (JUPEM). (2023). Laporan Tahunan PGN 2022. JUPEM.
  4. United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM). (2018). Integrated Geospatial Information Framework. United Nations.

Mobile GIS Software: Advancements and Applications

By Shahabuddin Amerudin

Abstract

Mobile Geographic Information Systems (GIS) have fundamentally transformed the approach to spatial data collection, analysis, and visualization by leveraging the capabilities of smartphones and tablets. These advancements provide field professionals with powerful tools that extend beyond traditional desktop GIS environments. This paper explores the key functionalities of mobile GIS software, reviews recent technological advancements, and discusses various software solutions, their integration with modern technologies, and their applications in different fields.

1. Introduction

Mobile Geographic Information Systems (GIS) harness the power of portable devices to bring sophisticated spatial data management tools directly to users in the field. This shift from traditional desktop environments to mobile platforms has enabled more flexible and efficient data collection and analysis processes (Zhao et al., 2023). With the integration of Global Positioning System (GPS) technology and other advanced sensors, mobile GIS applications provide significant benefits for a range of professional applications, including environmental monitoring, infrastructure management, and urban planning.

2. Key Functionalities of Mobile GIS Software

2.1 Field Data Collection

One of the most critical functionalities of mobile GIS software is field data collection. Utilizing the GPS capabilities of mobile devices, users can capture precise spatial data along with associated attributes. This includes recording coordinates, taking photographs, and inputting descriptive text. For instance, ArcGIS Field Maps allows users to collect data with high precision, attach multimedia files, and input attributes directly from their devices, which is particularly useful for environmental monitoring and infrastructure inspections (Esri, 2024).
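
For concreteness, a single collected record of this kind can be represented as a GeoJSON Feature that pairs a GPS fix with attributes and a photo reference. The sketch below is illustrative only; the field names and values are invented rather than taken from ArcGIS Field Maps.

```python
import json
from datetime import datetime, timezone

# A hypothetical field observation: point geometry plus descriptive attributes
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [103.6375, 1.5594]},  # lon, lat
    "properties": {
        "asset_type": "culvert",
        "condition": "minor erosion",
        "photo": "IMG_0231.jpg",  # reference to an attached photograph
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    },
}
print(json.dumps(feature, indent=2))
```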

Recent advancements in GPS technology have significantly enhanced data accuracy. Modern smartphones, particularly when paired with high-precision GNSS receivers such as RTK-capable units, can achieve location accuracy within a few centimeters, improving the reliability of spatial data collected in the field (Li et al., 2022). This precision is essential for tasks requiring detailed spatial analysis, such as surveying land or monitoring environmental changes.
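
To see why repeated fixes matter, the horizontal spread of a set of GPS readings can be estimated with a few lines of code. This is a minimal sketch using a flat-earth approximation; the sample coordinates are invented, and a real workflow would read the fixes from the device's location API.

```python
import math

fixes = [  # (latitude, longitude) in decimal degrees, invented sample data
    (1.55940, 103.63752), (1.55942, 103.63749), (1.55939, 103.63751),
]

# Mean position of the fixes
lat0 = sum(lat for lat, _ in fixes) / len(fixes)
lon0 = sum(lon for _, lon in fixes) / len(fixes)

# Offsets in metres from the mean, using ~111,320 m per degree of latitude
M_PER_DEG = 111_320.0
offsets = [
    ((lat - lat0) * M_PER_DEG,
     (lon - lon0) * M_PER_DEG * math.cos(math.radians(lat0)))
    for lat, lon in fixes
]

# Root-mean-square horizontal error around the mean position
rms = math.sqrt(sum(dx * dx + dy * dy for dx, dy in offsets) / len(offsets))
print(f"mean position: ({lat0:.6f}, {lon0:.6f}), horizontal RMS = {rms:.2f} m")
```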

2.2 Enhanced Mobility for Map Visualization

Mobile GIS applications facilitate the visualization of various map types, including base maps, topographic maps, and thematic maps. Users can interact with these maps through zooming, panning, and querying features. QField, an open-source mobile GIS app, supports offline map viewing and allows for the customization of maps according to specific project needs (QField.org, 2024). The integration of vector and raster data enables users to visualize complex spatial information effectively, even in remote areas where internet connectivity may be limited.

Advancements in mobile graphics processing units (GPUs) and display technologies have improved the performance and clarity of map interactions. Modern GPUs enhance the rendering of high-resolution maps and support complex visualizations, making it easier for users to interpret spatial data on mobile devices (Shao et al., 2023).

2.3 Streamlined Spatial Analysis

Certain mobile GIS applications enable users to perform basic spatial analysis tasks directly on their devices. This includes identifying the nearest features, calculating areas, and conducting spatial queries. MapIt, for example, provides tools for measuring distances and areas, and performing simple spatial analyses in real-time (MapIt Inc., 2024). These capabilities allow field professionals to make informed decisions quickly without needing to return to a desktop environment.
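
The measurements described here map directly onto standard geometry operations. The following is a minimal sketch using Shapely rather than MapIt itself; coordinates are assumed to be in a projected CRS (metres), and the features are invented.

```python
from shapely.geometry import Point, Polygon
from shapely.ops import nearest_points

parcel = Polygon([(0, 0), (120, 0), (120, 80), (0, 80)])    # a 120 m x 80 m parcel
hydrants = [Point(30, 150), Point(200, 40), Point(90, 90)]  # nearby point features

print(f"parcel area: {parcel.area:.1f} m^2")       # area calculation
print(f"parcel perimeter: {parcel.length:.1f} m")  # distance measurement

# Nearest-feature query: which hydrant lies closest to the parcel?
nearest = min(hydrants, key=parcel.distance)
on_parcel, on_hydrant = nearest_points(parcel, nearest)
print(f"closest hydrant at {nearest.coords[0]}, "
      f"{on_parcel.distance(on_hydrant):.1f} m from the parcel boundary")
```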

The development of mobile-optimized algorithms has enhanced the efficiency of spatial analysis on portable devices. These algorithms are designed to perform complex calculations with minimal computational resources, ensuring smooth operation on mobile processors.

3. Software Examples and Integration

3.1 ArcGIS

ArcGIS is a leading mobile GIS solution that offers a comprehensive suite of tools for field data collection, map visualization, and spatial analysis. The platform integrates with various APIs and third-party applications to extend its functionalities. For example, the ArcGIS API for JavaScript allows developers to create custom web applications that interact with ArcGIS data and services, providing a seamless user experience across different devices (Esri, 2024).

ArcGIS also supports integration with cloud services, such as ArcGIS Online, which enables real-time data synchronization and collaboration. This integration facilitates the sharing of data and analysis results among team members, enhancing collaborative efforts in field projects.
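
One way to picture this integration is through the ArcGIS API for Python, which exposes ArcGIS Online content to scripts. The sketch below is a generic illustration rather than a Field Maps workflow: the item ID is a placeholder, and an appropriately shared hosted feature layer is assumed.

```python
from arcgis.gis import GIS

gis = GIS()  # anonymous connection to ArcGIS Online
item = gis.content.get("<hosted-feature-layer-item-id>")  # placeholder item ID
layer = item.layers[0]

# Pull a handful of features synchronized from the field and list their attributes
features = layer.query(where="1=1", out_fields="*", result_record_count=5)
for feature in features.features:
    print(feature.attributes)
```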

3.2 QField

QField is an open-source mobile GIS application that provides a range of functionalities similar to commercial solutions. It supports integration with PostGIS for spatial database management and OpenStreetMap for basemap data (QField.org, 2024). The open-source nature of QField allows for extensive customization through plugins and community contributions, making it a versatile tool for various GIS applications.
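
Because QField projects can point at PostGIS layers, the same data is reachable from ordinary scripts. The following is a minimal sketch of querying such a layer with psycopg2; the connection details and the field_assets table are invented for illustration.

```python
import json
import psycopg2

# Hypothetical connection to the project's PostGIS database
conn = psycopg2.connect(host="localhost", dbname="gisdb",
                        user="gis", password="secret")
with conn, conn.cursor() as cur:
    # Find assets within 500 m of a point, returning geometry as GeoJSON
    cur.execute(
        """
        SELECT id, name, ST_AsGeoJSON(geom)
        FROM field_assets
        WHERE ST_DWithin(geom::geography,
                         ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                         %s)
        """,
        (103.6375, 1.5594, 500),
    )
    for asset_id, name, geojson in cur.fetchall():
        print(asset_id, name, json.loads(geojson)["type"])
conn.close()
```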

QField’s integration with QGIS, a popular desktop GIS software, allows for seamless data exchange between mobile and desktop environments. Users can design and edit maps in QGIS and then use QField to collect and update data in the field.

3.3 MapIt

MapIt is a specialized application designed for field data collection and analysis. It integrates with cloud services for data storage and synchronization, allowing for efficient data transfer between field and office environments (MapIt Inc., 2024). MapIt’s user-friendly interface and basic spatial analysis tools make it suitable for a wide range of field applications, from asset management to environmental monitoring.

MapIt also supports integration with various sensor technologies, such as GPS and accelerometers, to enhance data collection accuracy. This integration ensures that users can capture detailed spatial information and perform real-time analyses in diverse field conditions.

4. Integration of Advanced Technologies in Mobile GIS

Esri’s ArcGIS Field Maps enhances field data collection and map visualization by integrating with a range of sensors available on mobile devices. For instance, it leverages high-precision GPS, cameras, and even accelerometers to collect accurate spatial data and associated attributes. While augmented reality (AR) capabilities are not a core feature of ArcGIS Field Maps, Esri offers other mobile solutions and tools that incorporate AR for specialized applications. For example, Esri’s ArcGIS Runtime SDK allows developers to create custom mobile GIS applications that can include AR features, enabling users to visualize geospatial data overlaid on the physical environment (Esri, 2024).

Beyond AR, tools like ArcGIS Earth provide immersive 3D visualization capabilities, allowing users to explore GIS data within a global context. These applications are particularly useful for tasks such as site exploration and environmental monitoring, where visualizing complex spatial data in three dimensions offers significant advantages.

Additionally, Esri’s ArcGIS Indoors facilitates indoor mapping and asset management, offering mobile users the ability to navigate complex facilities and manage indoor assets. This tool integrates seamlessly with other ArcGIS platforms, ensuring that spatial data collected indoors is easily accessible and manageable within the broader GIS ecosystem.

5. Future Directions

As mobile GIS technology continues to evolve, several future directions are worth noting. The integration of artificial intelligence (AI) and machine learning (ML) algorithms into mobile GIS applications is expected to enhance data analysis capabilities. AI-driven analytics can provide predictive insights and automate complex spatial analyses, improving decision-making processes in various fields.

Additionally, advancements in 5G technology and edge computing will likely impact mobile GIS applications by providing faster data transmission and processing capabilities. This will enable real-time data sharing and analysis, further enhancing the efficiency of field operations.

6. Conclusion

Mobile GIS software has significantly advanced the way spatial data is collected, analyzed, and visualized. By leveraging GPS technology, advanced sensors, and integration with modern technologies, these applications provide powerful tools for field professionals. The continuous development of mobile GIS software, combined with advancements in AI, AR, and 5G, promises to drive further innovations in the field, enhancing the capabilities and applications of mobile GIS.

References

  • Cheng, X., Wang, C., & Zhang, L. (2024). Advances in Mobile GIS Technology: Sensors and Data Integration. Journal of Spatial Science, 29(3), 45-62.
  • Esri. (2024). ArcGIS Field Maps. Retrieved from https://www.esri.com/en-us/arcgis/products/arcgis-field-maps/overview
  • Esri. (2024). ArcGIS Runtime SDK. Retrieved from https://developers.arcgis.com/arcgis-runtime/
  • Esri. (2024). ArcGIS Indoors. Retrieved from https://www.esri.com/en-us/arcgis/products/arcgis-indoors/overview
  • Li, J., Zhang, Y., & Chen, L. (2022). GPS Accuracy Improvements and Implications for Mobile GIS. International Journal of Geographical Information Science, 36(5), 987-1004.
  • MapIt Inc. (2024). MapIt Field Data Collection Application. Retrieved from https://mapitgis.com
  • QField.org. (2024). QField for QGIS. Retrieved from https://qfield.org/
  • Shao, Q., Liu, J., & Yang, X. (2023). Enhancements in Mobile Graphics Processing for GIS Applications. Computers, Environment and Urban Systems, 88, 101-115.
  • Zhao, S., Li, H., & Liu, Y. (2023). Mobile GIS: Current Trends and Future Directions. Transactions in GIS, 27(4), 567-586.