Fortinet has released an update for FortiSIEM, version 6.7.7. The Rocky Linux OS has been upgraded to version 8.8, and the FortiSIEM repositories for Rocky Linux have been updated as of June 19, 2023. This release also introduces mandatory logout from the GUI after a defined period of user inactivity. In addition, a number of smaller issues related to the API, the parser, and other components have been resolved, improving the performance and stability of the system.
Enhancements:
- Updated Rocky Linux OS to version 8.8 and included Rocky Linux OS updates published up to June 19, 2023. This covers all improvements and fixes in the Rocky Linux operating system.
- Updated the FortiSIEM repositories for Rocky Linux (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs-r8.fortisiem.fortinet.com) as of June 19, 2023. This allows customers running version 6.4.1 or later to update the Rocky Linux operating system by following the steps described in the FortiSIEM OS Update Procedure.
- Enforced mandatory GUI idle timeout. The GUI idle timeout can now be set under CMDB > User > Idle Timeout, and this functionality works correctly in this release. If a user does not move the mouse or press any key, and is not on a Dashboard, they are automatically logged out after the configured idle time.
Resolved issues:
| Bug ID | Severity | Module | Description |
|---|---|---|---|
| 929885 | Major | App Server | Test Connectivity & Discovery may get stuck at "Database update 0%" when a few discoveries are running. |
| 922978 | Major | Report | ReportWorker on EventDB environments may be slow in processing events and sending them to ReportMaster. |
| 924887 | Minor | App Server | Two public API issues are resolved: (a) the Triggering Event API (/rest/pub/incident/triggeringEvent) returns a 404 error instead of 200 when there are no triggering events, and (b) the Incident API (GET/POST /phoenix/rest/pub/incident) does not return data when there are exactly 2 pages. |
| 923024 | Minor | App Server | In the GUI, switching a user from Super Global to a specific Organization does not work unless the user belongs to all Organizations. |
| 898661 | Minor | App Server | A Rule exception condition is not saved properly if the Filter Condition and the Time Condition are added in separate steps. For example, if you add the Filter Condition, click Save, then add the Time Condition and click Save again, the format of the saved Time Condition is incorrect. This makes the Rule exception condition ineffective. |
| 921451 | Minor | Event Pulling Agents | Azure for US Govt does not work (fails even with correct credentials). |
| 928414 | Minor | Parser | phParser CPU usage may be high if the event size is very large. |
| 918150 | Minor | System | Upgrade can fail when DNS name resolution for the Rocky Linux OS repo fails. |
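For reference, the public API behavior fixed in bug 924887 can be checked with a direct REST call. A minimal sketch, assuming a hypothetical Supervisor host, hypothetical credentials, and a hypothetical incidentId parameter name; only the endpoint path comes from the bug description:

```bash
# supervisor.example.com, apiuser:password, and incidentId=12345 are placeholders.
curl -sk -u 'apiuser:password' \
  -o /dev/null -w 'HTTP %{http_code}\n' \
  'https://supervisor.example.com/rest/pub/incident/triggeringEvent?incidentId=12345'
# Expected after the fix: HTTP 200 (with an empty result) when the incident has
# no triggering events, instead of HTTP 404.
```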
Known issues:
- ClickHouse Related
- Elasticsearch Related
- EventDB Related
- HDFS Related
- High Availability Related
ClickHouse Related
- If you are running ClickHouse event database and want to do Active-Active Supervisor failover, then your Supervisor should not be the only ClickHouse Keeper node. In that case, once the Supervisor is down, the ClickHouse cluster will be down and inserts will fail. It is recommended that you have 3 ClickHouse Keeper nodes running on Workers.
- If you are running ClickHouse, then during a Supervisor upgrade to FortiSIEM 6.7.0 or later, instead of shutting down Worker nodes, you need to stop the backend processes by running the following command from the command line.
phtools --stop all
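If you have several Workers, the command can be issued from the Supervisor over SSH. A minimal sketch, assuming root SSH access and hypothetical Worker hostnames:

```bash
# worker1/worker2 are hypothetical; replace with your actual Worker nodes.
for w in worker1.example.com worker2.example.com; do
  ssh root@"$w" 'phtools --stop all'   # stop backend processes on each Worker
done
```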
- If you are running Elasticsearch or FortiSIEM EventDB and switch to ClickHouse, then you need to follow two steps to complete the database switch.
- Set up the disks on each node in ADMIN > Setup > Storage and ADMIN > License > Nodes.
- Configure ClickHouse topology in ADMIN > Settings > Database > ClickHouse Config.
- In a ClickHouse environment, queries will not return results if none of the query nodes within a shard is reachable from the Supervisor and responsive. In other words, results are returned only if at least 1 query node in every shard is healthy and responds to queries. To avoid this condition, make sure all Query Worker nodes are healthy.
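One way to spot unreachable or failing query nodes before this bites is to inspect ClickHouse's built-in system.clusters table from any cluster node. A minimal sketch; the cluster and host names will be whatever your ClickHouse Config defines:

```bash
# Lists every replica in each shard together with its error counter; a non-zero
# errors_count suggests the node has recently been failing requests.
clickhouse-client --query "
  SELECT cluster, shard_num, replica_num, host_name, errors_count
  FROM system.clusters
  ORDER BY cluster, shard_num, replica_num
  FORMAT PrettyCompact"
```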
Elasticsearch Related
- In Elasticsearch based deployments, queries containing "IN Group X" are handled using Elastic Terms Query. By default, the maximum number of terms that can be used in a Terms Query is set to 65,536. If a Group contains more than 65,536 entries, the query will fail.
The workaround is to change the "max_terms_count" setting for each event index. FortiSIEM has been tested up to 1 million entries. The query response time will be proportional to the size of the group.
Case 1. For already existing indices, issue the following REST API call to update the setting:
PUT fortisiem-event-*/_settings
{ "index" : { "max_terms_count" : "1000000" } }
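The same update can be issued with curl from any host that can reach the Elasticsearch REST port. A minimal sketch, assuming a hypothetical coordinator node es-host on port 9200 with no authentication:

```bash
# es-host is a placeholder for your Elasticsearch endpoint.
curl -X PUT 'http://es-host:9200/fortisiem-event-*/_settings' \
  -H 'Content-Type: application/json' \
  -d '{ "index": { "max_terms_count": "1000000" } }'
```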
Case 2. For new indices that are going to be created in the future, update fortisiem-event-template so those new indices will have a higher max_terms_count setting
cd /opt/phoenix/config/elastic/7.7
- Add "index.max_terms_count": 1000000 (including the quotation marks) to the "settings" section of the fortisiem-event-template. Example:
…
"settings": {
    "index.max_terms_count": 1000000,
…
- Navigate to ADMIN > Storage > Online and perform Test and Deploy.
- Verify that new indices have the updated terms limit by executing the following simple REST API call.
GET fortisiem-event-*/_settings
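With curl, the check can be narrowed to the one setting of interest. A sketch under the same hypothetical es-host assumption as above:

```bash
# filter_path trims the response down to each index's max_terms_count value.
curl -s 'http://es-host:9200/fortisiem-event-*/_settings?filter_path=*.settings.index.max_terms_count'
```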
- FortiSIEM uses dynamic mapping for Keyword fields to save Cluster state. Elasticsearch needs to encounter some events containing these fields before it can determine their type. For this reason, queries containing "group by" on any of these fields will fail if Elasticsearch has not seen any event containing these fields. The workaround is to first run a non-group-by query with these fields to make sure that these fields have non-null values.
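You can also check directly whether Elasticsearch has already mapped a given field before grouping on it. A minimal sketch, again assuming the hypothetical es-host, with myCustomField standing in for whichever Keyword field you intend to group by:

```bash
# An empty mapping response means no indexed event has contained the field yet,
# so a group-by on it would fail until such events arrive.
curl -s 'http://es-host:9200/fortisiem-event-*/_mapping/field/myCustomField'
```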
EventDB Related
Currently, policy-based retention for EventDB does not cover two event categories: (a) system events with phCustId = 0, e.g., a FortiSIEM External Integration Error, a FortiSIEM process crash, etc., and (b) Super/Global customer audit events with phCustId = 3, e.g., the audit log generated by a Super/Global user running an ad hoc query. These events are purged when disk usage reaches the high watermark.
HDFS Related
If you are running real-time Archive with HDFS, and have added Workers after the real-time Archive has been configured, then you will need to perform a Test and Deploy for HDFS Archive again from the GUI. This will enable HDFSMgr to know about the newly added Workers.
High Availability Related
If you make changes to the following files on any node in the FortiSIEM Cluster, then you will have to manually copy these changes to other nodes.
- FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you make a Supervisor (respectively Worker, Collector) related change in this file, then the modified file should be copied to all Supervisors (respectively Workers, Collectors).
- FortiSIEM Identity and Location Configuration file (/opt/phoenix/config/identity_Def.xml): This file should be identical on Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.
- FortiSIEM Profile file (ProfileReports.xml): This file should be identical on Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.
- SSL Certificate (/etc/httpd/conf.d/ssl.conf): This file should be identical on Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.
- Java SSL Certificates (files cacerts.jks, keyfile and keystore.jks under /opt/glassfish/domains/domain1/config/): If you change these files on a Supervisor, then you have to copy these files to all Supervisors.
- Log pulling External Certificates: Copy all log pulling external certificates to each Supervisor.
- Event forwarding Certificates defined in the FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you change these on one node, you need to change them on all nodes.
- Custom cron job: If you change this file on a Supervisor, then you have to copy this file to all Supervisors.
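For the files with fixed paths in the list above, the manual copy can be scripted. A minimal sketch, assuming root SSH access and hypothetical peer hostnames; ProfileReports.xml is omitted because its location depends on your deployment, and the scoping rules above still apply (e.g., Collector-related phoenix_config.txt changes go to Collectors, Java certificates only to Supervisors):

```bash
# super2/worker1/worker2 are hypothetical; list the peer nodes that must stay in sync.
PEERS="super2.example.com worker1.example.com worker2.example.com"
FILES="/opt/phoenix/config/phoenix_config.txt \
       /opt/phoenix/config/identity_Def.xml \
       /etc/httpd/conf.d/ssl.conf"
for host in $PEERS; do
  for f in $FILES; do
    scp "$f" root@"$host":"$f"   # copy each changed file to the same path on the peer
  done
done
```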
Vendor release notes: FortiSIEM 6.7.7
Best regards,
The B&B Team
Bezpieczeństwo w biznesie